
Karl Friston: a neuroscientist who might hold the key to true AI
https://www.wired.com/story/karl-friston-free-energy-principle-artificial-intelligence
======
atschantz
It's worth noting that 'free energy' is just the negative of the 'evidence
lower bound' that is optimized by a large portion of today's machine learning
algorithms (e.g. variational auto-encoders).

It's also worth noting that 'predictive coding' - a dominant paradigm in
neuroscience - is a form of free energy minimization.

Moreover, free energy minimization (as predictive coding) approximates the
backpropagation algorithm [1], but in a biologically plausible fashion. In
fact, most biologically plausible deep learning approaches use some form of
prediction error signal, and are therefore functionally akin to predictive
coding.

Which is all just to say that the notion of free energy minimization is
somewhat commonplace in both neuroscience and machine learning.

[1]
[https://www.ncbi.nlm.nih.gov/pubmed/28333583](https://www.ncbi.nlm.nih.gov/pubmed/28333583)
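
To make the link concrete, here is a toy Monte Carlo estimate of the evidence
lower bound for a conjugate Gaussian model (the model and all numbers are made
up for illustration). The variational free energy is just the negative of this
quantity, and the bound is tight exactly when the approximate posterior
matches the true one:

```python
import math
import random

random.seed(0)

def log_normal(v, mean, std):
    """Log density of N(mean, std^2) at v."""
    return -0.5 * math.log(2 * math.pi * std ** 2) - (v - mean) ** 2 / (2 * std ** 2)

def elbo(x, mu_q, sigma_q, n_samples=20000):
    """Monte Carlo estimate of E_q[log p(x, z) - log q(z)], the evidence
    lower bound (i.e. the negative of the variational free energy)."""
    total = 0.0
    for _ in range(n_samples):
        z = random.gauss(mu_q, sigma_q)
        total += (log_normal(z, 0.0, 1.0)      # prior p(z) = N(0, 1)
                  + log_normal(x, z, 1.0)      # likelihood p(x|z) = N(z, 1)
                  - log_normal(z, mu_q, sigma_q))
    return total / n_samples

x = 1.0
# For this conjugate model the exact posterior is N(x/2, 1/2) and the true
# log evidence is log N(x; 0, 2); the bound is tight only at the posterior.
tight = elbo(x, 0.5, math.sqrt(0.5))
loose = elbo(x, -1.0, 1.0)
evidence = log_normal(x, 0.0, math.sqrt(2.0))
print(loose, tight, evidence)  # loose < tight, and tight matches the evidence
```

Minimizing free energy over the variational parameters is exactly what closes
the gap between the loose bound and the true log evidence.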

------
mafribe
Clickbait article.

It is noteworthy that Friston has, as of November 2018, neither (1) formalised
_free energy minimisation_ (FEM) with sufficient precision that it goes beyond
a vague research heuristic that can be (and is) adapted in ad-hoc ways; nor (2)
come up with sufficient empirical evidence for his claim that FEM is how human
or animal brains work -- despite the recent revolution in our ability to
measure live neurons, and despite having been asked (in private) by working
neuroscientists, including at his university.

~~~
varjag
Agreed. No technical detail means quackery, something the field always had in
abundance.

~~~
w_t_payne
Oh boy, yes. Neuroscience is _all_ about serving the egos of the scientists,
and it can often be anything but scientific.

(Although, as a failed vision scientist myself, I may be credibly accused of
some disqualifying bias in this regard).

------
jamii
Notes from the last time I tried to understand this -
[https://www.lesswrong.com/posts/wpZJvgQ4HvJE2bysy/god-help-u...](https://www.lesswrong.com/posts/wpZJvgQ4HvJE2bysy/god-help-us-let-s-try-to-understand-friston-on-free-energy#Wh3HMLbd7Xyhh2LNw)

~~~
sydd
Thanks, it was a great read!

From what I get, this whole thing is more like an abstract ruleset describing
how decision making in the brain works, rather than a brain model. Or am I
wrong; has anyone built a network model based on this theory?

~~~
atschantz
In terms of the free energy 'principle', it makes no predictions about how
free energy is minimized. But there have been multiple process theories
suggested, most notably predictive coding (which is a dominant paradigm in
neuroscience) [1] and variational message passing [2].

[1]
[https://en.wikipedia.org/wiki/Predictive_coding](https://en.wikipedia.org/wiki/Predictive_coding)
[2]
[http://www.jmlr.org/papers/volume6/winn05a/winn05a.pdf](http://www.jmlr.org/papers/volume6/winn05a/winn05a.pdf)

~~~
eli_gottlieb
Isn't variational message-passing the algorithmic-level theory about where
predictive coding comes from?

~~~
atschantz
I think you might be right, a quote from Friston on the relationship (in
reference to belief propagation):

"We turn to the equivalent message passing for continuous variables, which
transpires to be predictive coding [...]"

It could be that belief propagation is in the context of discrete variables,
whereas predictive coding is in the context of continuous, both of which are a
form of (variational) message passing.
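
For intuition, here is a one-unit caricature of predictive coding for a
continuous variable (entirely a toy of mine, not Friston's full scheme): the
unit updates its prediction by a precision-weighted prediction error.

```python
def predictive_coding_step(prediction, signal, precision=0.5):
    """Update a prediction from the precision-weighted prediction error."""
    error = signal - prediction
    return prediction + precision * error

mu = 0.0
for signal in [1.0] * 5:  # repeatedly observe the same signal
    mu = predictive_coding_step(mu, signal)
print(mu)  # → 0.96875; the prediction converges and the error shrinks
```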

------
marmaduke
I’ve seen Friston speak a few times. My favorite quote along these lines is
that “your arm moves because you predict it will, and your motor system seeks
to minimize prediction error.”

He’s been a huge figure in human neuroscience, bringing statistics to all
those psychologists with fMRI scanners.

------
ArtWomb
Most grad-level Deep Learning classes have a week or so devoted to
"Approximate Bayes" methods. And it's conceivable future updates to all
popular probabilistic programming languages will include "programmable" rather
than "fixed-function" inference methods.

"Inference Metaprogramming" paper

[https://people.csail.mit.edu/rinard/paper/pldi18.pdf](https://people.csail.mit.edu/rinard/paper/pldi18.pdf)

The latest state-of-the-art research will be presented at the upcoming NeurIPS
conference:

Symposium on Advances in Approximate Bayesian Inference

[http://approximateinference.org/accepted/](http://approximateinference.org/accepted/)

I think the most fascinating aspect is that Friston and his team are working
within the field of Computational and Algorithmic Psychiatry. I mean, this
preprint is really interesting: using video game play to diagnose disorders.

Active Inference in OpenAI Gym: A Paradigm for Computational Investigations
Into Psychiatric Illness

[https://www.biologicalpsychiatrycnni.org/article/S2451-9022(...](https://www.biologicalpsychiatrycnni.org/article/S2451-9022\(18\)30161-7/pdf)

~~~
sixdimensional
Since you mention Bayesian methods, I thought I may randomly ask you - have
you come across any good work about applications of subjective Bayesian
statistics in AI?

I was particularly interested in subjective Bayes theory due to the way it
seems to interleave human input with mathematical theory.

I first learned about it from a non-fiction book in which these techniques
were used by scientists in the US to locate Russian ICBMs that were test-fired
during the Cold War and landed in the ocean. The wisdom of experts was
quantified and fed into a simple Bayesian subjective probability calculation,
which led to a prioritization of target areas to investigate, and the US
located them on either the first or second try - I can't recall which. I've
seen a few other interesting applications of this as well.

I'm not an expert in this area, but you sound like you might be - so I thought
I'd take the chance to ask :)
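
(For what it's worth, that sounds like Bayesian search theory. A toy sketch,
with hypothetical expert weights and a hypothetical detection probability:)

```python
def normalize(p):
    """Scale a dict of weights so it sums to 1."""
    s = sum(p.values())
    return {k: v / s for k, v in p.items()}

# Hypothetical expert weights for which grid cell the object landed in.
prior = normalize({"A": 4.0, "B": 2.0, "C": 1.0})

def update_after_miss(belief, searched, p_detect=0.8):
    """Bayes' rule after an unsuccessful search of one cell: the object is
    either elsewhere, or there but missed (probability 1 - p_detect)."""
    post = {cell: p * (1 - p_detect) if cell == searched else p
            for cell, p in belief.items()}
    return normalize(post)

# Search the most probable cell first; a miss shifts probability elsewhere,
# reprioritizing where to look next.
posterior = update_after_miss(prior, "A")
print(max(posterior, key=posterior.get))  # → B
```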

~~~
Chris_Jay
I'm not an expert either, but you might be interested in the book
'Superforecasting' by Tetlock and Gardner - they have done some (IMHO) very
interesting research on predictions markets. It might be the kind of thing
you're looking to research more of!

~~~
sixdimensional
Thanks for the recommendation - that does look very interesting indeed! I will
find myself a copy and have a read. Hacker News book club comes through
again!! :)

------
paraschopra
For anyone who is interested in a tutorial and actual implementation of active
inference (an idea based on Free Energy Principle), here's one in Python
[https://kaiu.me/2017/07/11/introducing-the-deep-active-infer...](https://kaiu.me/2017/07/11/introducing-the-deep-active-inference-agent/)

I have been trying to understand the FEP, and so far my understanding is that
the agent essentially tries to learn the generative model that most closely
explains its observations, and then tries to act in ways that are more likely
to cause the environment to generate its preferred observations (say, pH and
temperature in the right range).

The problem with this approach is the scalability of inference and of
candidate model generation. By the time you provide a model for the agent, you
as a designer have already coded in much of your knowledge, and hence
constrain the agent. True AI will build models from scratch, not just learn
model complexity.
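
To illustrate the 'act to get preferred observations' part with a toy (my own
made-up numbers, and ignoring the information-gain term of full active
inference): pick the action whose predicted observation distribution diverges
least from the prior preferences.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Observations are temperature bands: (too low, in range, too high).
preferred = [0.05, 0.90, 0.05]   # prior preferences: expect the right range
predicted = {                    # generative model's P(observation | action)
    "heat": [0.05, 0.35, 0.60],
    "hold": [0.10, 0.80, 0.10],
}

# Choose the action minimizing KL(predicted || preferred), i.e. the risk of
# ending up with dispreferred observations.
action = min(predicted, key=lambda a: kl(predicted[a], preferred))
print(action)  # → hold
```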

~~~
eli_gottlieb
>True AI will build models from scratch, not just learn model complexity.

There's no such thing as truly learning "from scratch" -- the No Free Lunch
Theorem holds no matter what. What you _can_ do is pick a sufficiently large
(e.g. Turing-complete) hypothesis class, and make simplifying assumptions
(such as regularization or priors) that allow it to be feasibly learnable.

~~~
antidesitter
The No Free Lunch Theorem is irrelevant to the real world [0][1]. It assumes
all functions, even those with infinite algorithmic complexity, are equally
likely.

You should look into algorithmic probability for a better foundation.

[0]
[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.540....](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.540.7938)

[1] [https://arxiv.org/abs/1111.3846](https://arxiv.org/abs/1111.3846)

~~~
eli_gottlieb
On the one hand, yes, the No Free Lunch Theorem seems to intuitively rely on
the set-theoretic definition of functions, rather than building on a
constructive foundation to hypothesize that functions which are "physically
harder", in some sense, are less likely.

On the other hand, algorithmic probability requires first defining a Turing
machine, rendering the Solomonoff Measure defined only _up to_ a specific
programming language, which can bias it some arbitrary amount. That's on top
of the Solomonoff Measure itself being incomputable, and so utterly useless as
a foundation for real-world machine learning and computational cognitive
science.

I agree that positing a Bayesian prior on functions/programs/causal structures
gets you _around_ the No Free Lunch Theorem. The question just then ends up
being: what sort of hypothesis space, and what sort of prior, sufficiently
resemble the real world (the data-generating process) to allow for learning
from a given data set? That's a matter of science.

------
jimduk
The complex systems people used to discuss the problem of agents with internal
models making models of other agents [1].

Similarly biologists are interested in how a living thing 'organises itself'
in the world, maintains its structure and how its sensing and action is
coupled to the environment[2].

This sounds like a similar approach, however fuzzy. Isn't it just saying 'can
we look for principles that define how living creatures should organise the
effort (energy/information) it makes sense to put into "recognising/
predicting/ acting in / being in" the world?'

Makes sense there could be some shared mechanisms, though I'd personally be
surprised if they are universal, as differing life-forms seem suited for
differing levels of environmental change. This is something lots of people
have looked at (it's fun), and agree the Wired article doesn't give a clear
answer.

1\. Can't recall the paper, but I think it was Doyne Farmer (or Chris
Langton?) arguing that if your agent has complexity N, then you should spend
sqrt(N) complexity modelling another agent

2\. e.g. Maturana & Varela; a summary of autopoiesis here
[http://supergoodtech.com/tomquick/phd/autopoiesis.html](http://supergoodtech.com/tomquick/phd/autopoiesis.html)
but I'm sure lots of other biologists have good theories

------
snrji
By no means will I ever be able to grasp Friston's theory, but free energy
minimisation vaguely reminds me of curiosity-driven reinforcement learning.
Can anyone with more understanding than me confirm or deny this apparent
resemblance?

~~~
paraschopra
There are similarities. The differences between the two approaches are:

- FEP is Bayesian in nature, while there's usually no notion of uncertainty in
curiosity-driven RL

- In FEP, there's no explicit weighting of the explore/exploit tradeoff; it
emerges automatically from the equations

- FEP, since it's Bayesian, allows for more complex reasoning (like
counterfactuals)

- Curiosity-driven RL is scalable, while FEP is not feasible for anything
other than simple models

~~~
snrji
Excuse me, another followup question (can't edit on mobile): can you ELI5 how
exploitation and exploration "emerge" naturally, instead of the tradeoff being
explicitly coded as in RL?

~~~
atschantz
As a general answer, the theory suggests that organisms maximize a quantity
known as model evidence, which is just a way of saying 'how much evidence does
some data provide for my model of the world?'

There are two complementary ways to maximize this - change your model or
change your world.

If we now grant that actions also maximize model evidence, then actions can
either be conducted to sample data that improve the model's fit
(exploration), or to sample observations that are already consistent with the
current model (exploitation).
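
A toy illustration of those two routes (model and numbers invented for the
example): under a fixed Gaussian model, average log evidence rises either by
refitting the model to the sampled data, or by sampling data the current model
already predicts well.

```python
import math

def avg_log_lik(xs, mu, sigma=1.0):
    """Average log likelihood of the data under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs) / len(xs)

model_mu = 0.0
sampled = [2.0, 2.2, 1.8]                   # what the current policy observes
before = avg_log_lik(sampled, model_mu)

# Route 1, "change your model": update beliefs to fit the sampled data.
refit = avg_log_lik(sampled, sum(sampled) / len(sampled))

# Route 2, "change your world": act so as to sample observations the current
# model already expects.
consistent = [0.1, -0.2, 0.0]
act = avg_log_lik(consistent, model_mu)

print(before, refit, act)  # both routes beat `before`
```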

~~~
snrji
And the optimization process itself would determine whether updating the model
or changing the world is optimal, I guess. Thanks.

------
m-i-l
If you liked this, you may also like "Am I autistic? An intellectual
autobiography"[0] by Karl Friston. It doesn't go into his free energy idea at
all, but is more about the person behind the idea. My favourite line "I
remember being asked [by an educational psychologist] whether I thought the
puppets in Thunderbirds ever got hungry".

[0]
[https://www.aliusresearch.org/uploads/9/1/6/0/91600416/frist...](https://www.aliusresearch.org/uploads/9/1/6/0/91600416/friston_-_am_i_autistic_.pdf)

------
kken
The article is too long for the idea it tries to convey. I like to read to
broaden my mind, not for reading's sake.

~~~
temperfidelis2x
The article tells a story; it's not meant for people trying to grasp technical
details. Furthermore, it can be argued that reading (well-written texts like
this) for reading's sake also broadens your mind.

~~~
SubiculumCode
At least provide a summary. I mean, "After completing his medical studies,
Friston moved to Oxford and spent two years as a resident trainee at a
Victorian-era hospital called Littlemore. Founded under the 1845 Lunacy Act,
Littlemore had originally been instituted to help transfer all “pauper
lunatics” from workhouses to hospitals. By the mid-1980s, when Friston
arrived, it was one of the last of the old asylums on the outskirts of
England’s cities."

is a story.

But as a neuroscientist with an interest in machine learning, I want to know
the idea, not the history of Littlemore, attended by this scientist whose
tools and methods I have used (Friston motion parameters, I am looking at
you).

~~~
temperfidelis2x
In that case the wikipedia article is probably a decent starting point to see
whether you are interested or not:
[https://en.wikipedia.org/wiki/Free_energy_principle](https://en.wikipedia.org/wiki/Free_energy_principle)

~~~
SubiculumCode
I'd agree but the "free energy principle" is first mentioned a good 1,000+
words into the article.

------
currymj
As far as I can tell the “free energy principle” is just asserting that the
brain is approximately Bayesian and is doing some kind of variational
inference, right? I’m not sure how revolutionary that is.

(I’m predisposed not to like Friston because his work in fMRI plays fast and
loose with the idea of “causality”.)

~~~
atschantz
The 'revolutionary' aspect is the suggestion that a single celled organism is
also doing variational inference. Or, more accurately, can be described as
such.

~~~
eli_gottlieb
The trouble is hitting the right "happy medium" between (variational)
inference as an explanation of the sensory and motor cortices, and variational
inference as a universal theory of everything.

------
damnson
Intelligence automatically emerging in nature is very likely, and an obvious
prerequisite for humans rising to be the dominant species on earth. FEP makes
it seem like this is a new idea. How we think about the meta, and reconstruct
ideas from our own perspective, has been embedded in human adaptation for as
long as recorded history. To model "true" AI from a human perspective using
FEP, you need to model AI from the initial frame of reference, where human
intelligence emerged automatically. This could perhaps be done by manipulating
fundamental components of our brains, or by simulating scenarios where this
could have happened.

------
jchook
I found this interview with Karl Friston helpful to understand free energy
principle from a high level:

[https://www.youtube.com/watch?v=NIu_dJGyIQI](https://www.youtube.com/watch?v=NIu_dJGyIQI)

------
andrewfromx
I think true AI will not be a computer program that suddenly becomes
human-like. It will be a human that becomes more and more cyborg-like:
[https://techcrunch.com/2018/11/01/thomas-reardon-and-ctrl-la...](https://techcrunch.com/2018/11/01/thomas-reardon-and-ctrl-labs-are-building-an-api-for-the-brain/)
Soon humans will have more and more brain surgery adding cyborg features to
their natural (not artificial) intelligence, until at some point they are so
machine-like that, boom, AI.

~~~
amelius
Reminds me of the philosophical question of what happens to one's
consciousness if you'd replace their neurons, one at a time, by electronic
equivalents.

~~~
lurquer
It may be a fallacy to assume a neuron is less complex than a brain. It
depends on how one measures complexity and at what scale... but living
systems -- unlike non-living systems -- strangely get more complex the closer
in one goes. That is, it's fairly trivial to simulate an earthworm; it's
trickier to simulate the components of the earthworm.

~~~
klodolph
This is a completely minor point here... but "fallacy" means that there's
something wrong with the argument. If you have a disagreement about facts or
assumptions, then the word "fallacy" doesn't really apply (you can just say
"wrong" instead).

~~~
lurquer
It suffers from the fallacy of petitio principii, in that it assumes arguendo
that consciousness is comprised of neurons (and, as mentioned above, that a
neuron is less complex than consciousness). But it's not stated as an
'argument' in any case, so perhaps the term fallacy was out of place.

~~~
klodolph
Petitio principii is when the premise assumes the truth of the conclusion, but
since there is no argument and no conclusion it's impossible for the statement
to suffer from that fallacy.

~~~
lurquer
You're right. Shouldn't have posted that. I just wanted the last word.
(Dammit... did it again!)

------
qwerty456127
> Friston found time for other pursuits as well. At age 19, he spent an entire
> school vacation trying to squeeze all of physics on one page. He failed but
> did manage to fit all of quantum mechanics.

Is the page available to read?

~~~
GrinningFool
[https://www.aliusresearch.org/uploads/9/1/6/0/91600416/frist...](https://www.aliusresearch.org/uploads/9/1/6/0/91600416/friston_-_am_i_autistic_.pdf)

Page 6

Though not fully legible as captured in that pdf.

~~~
qwerty456127
Thanks but "not fully legible" means absolutely unreadable in this case. I've
tried zooming in yet couldn't recognize a single letter (those in the title
don't count).

------
w_t_payne
Thinking about this with my engineering hat on -- If I wanted to guide the
behaviour of such a system, I would have to influence the prediction somehow -
and then the system would act to change the state of the world to match that
prediction and/or update the prediction with more information about what is
actually going on (by actively making observations etc...). This seems like
quite an elegant and neat little lever for high-level control/objective
setting. A bit like a Picardian 'make it so' button...
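
That lever fits in a few lines (a toy 1-D system of my own, not anyone's
actual controller): clamp the prediction to the goal and let the system act to
cancel the prediction error.

```python
def act_to_cancel_error(state, prediction, gain=0.3):
    """One step of acting on the world to reduce prediction error."""
    error = prediction - state
    return state + gain * error

state, goal = 0.0, 10.0      # clamp the system's prediction to the goal
for _ in range(30):
    state = act_to_cancel_error(state, goal)
print(state)  # approaches 10.0: the system "makes it so"
```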

------
calebm
This almost sounds like a special case of Jeremy England's dissipation-driven
adaptation theory. Does anyone know the overlap/differences between these
theories (other than specificity)?

~~~
poslathian
Ctrl-F England and here you are. I’ve been searching for someone more informed
than me who has compared and contrasted the two, but haven't found anything.

~~~
dr_dshiv
Seconded. Check out David Bohm's idea of wholeness and harmony in "On
Creativity." Not what you are looking for, but another puzzle piece with the
same scent.

------
DanielleMolloy
Blog post about understanding Friston's ideas; also consider the number of
researcher comments it has provoked:
[http://slatestarcodex.com/2018/03/04/god-help-us-lets-try-to...](http://slatestarcodex.com/2018/03/04/god-help-us-lets-try-to-understand-friston-on-free-energy/)

------
axilmar
Well, here is what I believe brains do:

[https://news.ycombinator.com/item?id=9022206](https://news.ycombinator.com/item?id=9022206)

It makes total sense for the brain's job to be minimizing surprise, because
minimizing surprise is the best and most basic strategy for survival.

~~~
toxik
With all due respect, one sentence explaining how you think the mind works
isn't really worth much. It doesn't amount to much more than "the brain tries
to explain reality." Yes, ok, but how do you translate that into some
algorithm? How does it relate to gradient descent methods on neural networks?

~~~
Rainymood
>With all due respect, one sentence explaining how you think the mind works
isn't really worth much.

With all due respect, one sentence _can_ be worth a lot.

Some examples:

> F = ma

Another

> E = mc^2

And another

> G_{\mu\nu} = 8 \pi G (T_{\mu\nu} + \rho_{\Lambda} g_{\mu\nu})

Another example

>To be, or not to be, that is the question

Et cetera, et cetera. The length of something does not necessarily imply that
an idea is weak; maybe the idea is really deep. Dismissing an idea based on
length is idiotic.

Sorry for the rant.

~~~
toxik
None of these mean anything without their respective context. In fact, they're
all pretty pedestrian taken as a single sentence.

------
hlyshkow
I see no reference to the dead salmon in the MR scanner showing correlative
activation via statistical parametric mapping (SPM). Those results were
something of a hindrance to quite a few PET and fMRI researchers’ careers.

------
perpetualcrayon
What I got out of this: it appears, in essence, he's saying that "those
creatures with the most accurate picture of the world are the ones best
prepared to succeed in the world"?

------
DanielleMolloy
This is a brilliant portrait of Karl Friston. Thanks for sharing!

------
YeGoblynQueenne
>> He has an h-­index—a metric used to measure the impact of a researcher’s
publications—nearly twice the size of Albert Einstein’s.

That can only mean the h-index is a load of rubbish.

~~~
pygy_
Not really. The academic field grew tremendously in the meantime, so the
comparison is rubbish, but the index isn't.

It is defined, for an author, as the largest number N such that they have N
articles with at least N citations each.
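
In code, the definition is just this (a straightforward sketch):

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # → 4: four papers with at least 4 citations
```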

------
perpetualcrayon
It seems he's an ontologist at heart.

[https://en.wikipedia.org/wiki/Ontology](https://en.wikipedia.org/wiki/Ontology)

------
conjectures
Anyone have a reference relating the free energy minimisation principle /
active inference to reinforcement learning type environments?

~~~
atschantz
The particular study cited in the article is [1]; for a more general review of
the links to reinforcement learning, see [2].

[1]
[https://www.biologicalpsychiatrycnni.org/article/S2451-9022(...](https://www.biologicalpsychiatrycnni.org/article/S2451-9022\(18\)30161-7/pdf)
[2]
[https://journals.plos.org/plosone/article?id=10.1371/journal...](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0006421)

~~~
conjectures
Cheers.

------
nuguy
This article is a complete waste. The title implies that it's about AI, but it
turns out to be a portrait of a man's life — a PR piece. Not only that, but
free energy minimization has nothing to do with intelligence, other than
vaguely describing one of its most obvious and superficial characteristics.

—-

AI is the most important issue in the world. True general AI is an existential
threat to humankind. The economics of general AI lead to the extinction of
humans no matter how you slice it. Killer robots are just icing on the cake —
the tip of the iceberg.

General AI can be thought of as the keystone in the gateway of automation. It
allows the automation of the human mind itself. The AI we have now cannot do
this. Better ML algorithms will most likely never threaten the human mind. So
people have a very false and dangerous sense of security.

ML experts eagerly correct people like me with a vague notion and a wave of
the hand: AI won't be a problem for a long time. As I said, ML is not a threat
(as automation of human thought), because ML has nothing to do with human
thought. ML experts don't know anything about human thought, and therefore a
complete layman is just as qualified to speculate about general AI as an ML
expert is. Or a person with a physics degree, or what have you. You might say
that laymen tend to be dumber, or some variation on that, but that's beside
the point and irrelevant.

There are many reasons to be worried about the creation of general AI. First,
general AI is much broader than it is given credit for — sentience has many
more forms than the human mind, and is a broader attack surface than usually
thought. People imagine it as finding the human mind like a needle in a
haystack. It's a lot easier than that. The algorithm for the kernel of
intelligence is probably much simpler than one would initially imagine. We
don't know when we might stumble on it. Or I could be wrong, but I'm still
right, because even if it's relatively complex, we will still discover it if
we try — and we are trying. As I said, ML isn't a huge threat for general AI;
I think it's very likely that brain research is the biggest threat currently.
The resolution of MRI scanning and probing is increasing, as is the
computational power to make sense of the readings and test the algorithms
that we discover. I already see people commenting that computers won't be
powerful enough to test algorithms: you won't need a silicon version of the
brain to test them. I guarantee it.

If general AI were to come into existence, it would have the ability to do any
task better than a human. Any group or organization that uses AI to perform a
task will overtake anyone who does not. It will be a ratchet effect, where
each application of AI spreads across the world like a disease and never goes
away. Soon, everything is done with AI. A market economy's decentralized
nature makes it an absolute powder keg for AI in this respect, because each
node in the market is selfish and will implement AI to gain a short-term
advantage — and, as I've said, once one node does it, all nodes will do it.
This behaviour has historically fueled the success of markets but, as we have
seen with global warming, it does not always work out.

The key here is the fact that the only reason human life has value is that
humans offer an extremely vital and valuable service that cannot be found
anywhere else. Even so, most humans on this planet do not enjoy a high quality
of life. It is insane to imagine that once our only bargaining chip is ripped
from our collective hands, the number of people with a high standard of living
will go up instead of down. There will be mass unemployment. Humans will be
cast aside. And that's all assuming that robots are never made to maliciously
target human life for any reason.

People say that automation leads people to better, new jobs. In reality, jobs
are not an inexhaustible resource. They just seem to be.

The only solution, in one form or another, is the prohibition of AI. I hope
that someone else reading this will agree with me or suggest another solution.
I am interested in forming some kind of group to prevent all this from
happening.

~~~
ss2003
I agree with most of what you state, but the prohibition of AI is impossible.
How could you stop nations from researching it secretly? How could you stop
the Amazons and Baidus?

~~~
nuguy
The only thing that is clear is that something must be thought of and
attempted.

------
diego_moita
I hate this Wired style of "journalism": 99% of hype, hyperbole and anecdotes
wrapped in 1% of evidence and substance.

~~~
ss2003
It is a really annoying read. If there is really something to FEP, you
certainly won't find it in this article. If you can't explain something so
that a ten year old can understand it, you don't really know it. According to
this article, no one really knows FEP. But the worst of it is the subjectivity
of Friston's approach. He likes routine. He gets all out of sorts if his
regular activity is disrupted. He doesn't like surprise, so he concludes that
the answer to the ultimate question of life, the universe and everything is
avoiding surprise. Very self-serving. Well, guess what: there are plenty of
people (and other organisms too!) that like surprises! And they do quite well,
thank you. Without an appreciation of, and actual inclination to seek out,
surprises, the drive to explore would be snuffed out. Without that drive, new
habitats and opportunities are left untapped and wasted. It's fine that he
doesn't like surprises. That doesn't make it a good basis for AI consciousness
or for explaining living creatures in general.

------
ColinWright
So many submissions, as yet _zero_ comments:

[https://news.ycombinator.com/item?id=18487584](https://news.ycombinator.com/item?id=18487584)

[https://news.ycombinator.com/item?id=18463384](https://news.ycombinator.com/item?id=18463384)

[https://news.ycombinator.com/item?id=18457194](https://news.ycombinator.com/item?id=18457194)

[https://news.ycombinator.com/item?id=18449205](https://news.ycombinator.com/item?id=18449205)

[https://news.ycombinator.com/item?id=18446035](https://news.ycombinator.com/item?id=18446035)

Does no one have something interesting to say or add?

~~~
platz
Take for example this paragraph:

> “This is absolutely novel in history,” Ramstead told me as we sat on a bench
> in Queen Square, surrounded by patients and staff from the surrounding
> hospitals. Before Friston came along, “We were kind of condemned to forever
> wander in this multidisciplinary space without a common currency,” he
> continued. “The free energy principle gives you that currency.”

This is bloviation and crankery. I am not the target audience for this kind of
reputation-building.

~~~
janimo
I agree the style is annoying, using many such paragraphs to meet the required
word count, but Friston's reputation is well established in scientific circles
(although I first heard of him yesterday).

