
Our Brain Uses Statistics to Calculate Confidence, Make Decisions - brahmwg
http://www.neuroscientistnews.com/research-news/our-brain-uses-statistics-calculate-confidence-make-decisions
======
haberman
> It is Kepecs' thesis that statistics - generated by the objective processing
> of sensory and other data - is the ultimate language of the brain.

Every time I read stuff like this, I lose more confidence that the scientific
method as currently applied, funded, and published is capable of truly
conquering complex systems like the brain or nutrition.

I read article after article like this one, where a prominent researcher
conducts some experiment that shows some result. But listening to that person
talk about their work, it becomes clear that this has become their pet theory
that they want to be capable of explaining _everything_. As if everything we
know about some system will someday be shown to be reducible to one clean,
beautiful idea.

Here are a few quick reasons why I find this idea to be highly implausible:

\- There are a dizzying number of documented cognitive biases in humans. If
statistics is the ultimate language of the brain, our software is pretty
thoroughly buggy:
[https://en.wikipedia.org/wiki/List_of_cognitive_biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases)

\- Belief motivates people to do things that have absolutely no statistical
justification. For example, the time that the prophecy of a teenage girl
convinced a tribe of people in present-day South Africa to kill 300,000 -
400,000 of their own cattle, leading to the starvation and death of 20,000 -
40,000 people. How does a brain built on statistics decide that this is the
most rational course of action?
[http://persistentfrontiers.com/xhosacattlekilling/](http://persistentfrontiers.com/xhosacattlekilling/)

EDIT: Several people are replying with some form of: "your examples can still
show that the mind is inherently built on statistics if you think about it in
way X." Let's do a quick experiment to see if this theory is actually
falsifiable. What experiment/result would convince you that statistics is
_not_ "the ultimate language of the brain?"

~~~
jlgray
To quote wikipedia's article on statistics: _Statistics is the study of the
collection, analysis, interpretation, presentation, and organization of data._

Isn't that more or less what human brains do?

A neural network is just a curve fitting algorithm optimized to fit a bunch of
data, that is, a statistical method. NNs are, to the best of my knowledge, a
fairly good analogy for the brain's physical computation process.

NNs are also very easy to fool. Mislabel the humans in your robot training
data as 'kill', and your network will make bad decisions. Garbage in, garbage
out.
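
To make the garbage-in-garbage-out point concrete, here's a toy sketch (the data and labels are made up for illustration, not from the article): a trivial nearest-centroid classifier trained on swapped labels makes the opposite decision on the same input.

```python
# Toy nearest-centroid classifier. Poison the training labels and the
# "decisions" flip too: garbage in, garbage out.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(x, centroids):
    # Pick the label whose centroid is closest to x (squared distance).
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

humans = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1)]
rocks  = [(5.0, 5.0), (5.1, 4.9), (4.8, 5.2)]

good = {"spare": centroid(humans), "kill": centroid(rocks)}
bad  = {"kill": centroid(humans), "spare": centroid(rocks)}  # labels swapped

print(classify((0.1, 0.1), good))  # spare
print(classify((0.1, 0.1), bad))   # kill
```

The classifier itself is identical in both cases; only the labels attached to the training data differ.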

If you make a probabilistic model and attach a very high confidence to the
things that spirits tell little girls down by the river, then sometimes your
model will suggest some terrible ideas. Humans are also notoriously bad at
sampling data, but that doesn't mean it isn't what they're doing.
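
To illustrate with numbers I made up: plug a near-certain prior into Bayes' rule and even evidence that is ten times likelier under the opposite hypothesis takes several observations just to drag the posterior back to 50/50.

```python
# Sketch (invented numbers): Bayes' rule with a near-certain prior.

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: P(H|E) from P(H) and the two likelihoods."""
    num = prior * likelihood_if_true
    return num / (num + (1 - prior) * likelihood_if_false)

p = 0.9999  # "the spirits down by the river are reliable"
for _ in range(5):
    # Observe evidence that is 10x more likely if the belief is FALSE.
    p = update(p, 0.05, 0.5)
    print(round(p, 4))
# The posterior only reaches 50/50 on the fourth disconfirming observation.
```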

Your edit raises a good question, though! I can't say I have a good answer,
but would be interested to hear a good idea for such an experiment.

I want to use the phenomenon of the inner monologue and the act of convincing
oneself of something as a counterexample, but on the other hand, language is
very easily generated by statistical methods, and the brain apparently will
make decisions and then come up with justifications afterward.
[http://pss.sagepub.com/content/early/2016/04/27/095679761664...](http://pss.sagepub.com/content/early/2016/04/27/0956797616641943.abstract)

~~~
twinkletwinkle
>Isn't that more or less what human brains do?

Sure, but now what are you demonstrating? "The human brain collects and
interprets data." Great paper, thanks for the contribution to science.

~~~
webaba
Love it

------
intrasight
Slime molds also use statistics to calculate confidence, make decisions:

[http://phys.org/news/2014-01-slime-molds.html](http://phys.org/news/2014-01-slime-molds.html)

"Looking toward the future, Adamatzky is designing a hybrid device that
combines a slime mold with conventional electronic computers. A year ago he
received a 2.1 million GBP grant from the European Commission to build such a
computer. Now with funding from the EU Unconventional Computation program, he
is building a slime mold computer named PhyChip
([http://www.phychip.eu/](http://www.phychip.eu/))"

------
fudged71
I'm half colorblind. I often tell people that I don't see colors, I see
probabilities. If you ask me what color something is, I am going to make a
guess and there is a probability that I am right or wrong based on many
factors.

------
kingkawn
I think it's like a series of aqueducts myself.

It seems obvious that the brain will resemble whatever we've got as the most
complex modeling system at any given time while never being accurately
represented by any of them.

~~~
visarga
But we can still benchmark the performance of the latest model and compare it
to the human average. There are some tasks at which humans have already been bested.

------
kensai
"The work may also have wider implications. The fields of statistics and, in
particular, machine learning, may have something to learn from this inner
statistician. "Humans are still better than computers at solving really
difficult problems," says Kepecs."

Intuitive, as statistics is simply one of the many tools we humans use to
tackle difficult problems, but definitely not the only one.

~~~
rudolf0
But it is a little disheartening (yet also exciting) to think that when an AI
is eventually developed with a human's intuition, creativity, and learning
ability, we'll essentially have no advantages over them.

~~~
coldtea
It might also be a pipe dream.

We might have a global maximum of tech we can achieve with our brain capacity
that does not include building brain-level intelligence. Not because it's
some special taboo/limit, maybe just because it's too high a barrier.

(Or e.g. at best we could replicate it with organic matter, e.g. grow brain
cells, tissues etc, but then it won't be as potentially speedy/efficient as
computer AI either).

~~~
prodmerc
I think that's why the focus is on deep learning and self improvement for AIs.

We may not be able to create an AI that's as intelligent as us, but we can
create a rudimentary AI that can then make itself smarter as time goes by (and
since it can be practically immortal it would have a lot of time to learn) and
which can be easily duplicated later.

Downside is we probably won't know what it's actually thinking and what it
will do...

------
dschiptsov
Our brain, presumably, uses much simpler, primitive thresholds, like dynamic
weights in a weighted sum.

As one might deduce from the facts of the brain's physiology and development,
especially the pruning of neurons in the process of maturation, the key is "a
trained structure", to which a weighted graph is, perhaps, the most adequate
model, one shaped by highly specialized centers in the process of
continuous training. At least all pattern recognition, including language and
vision, is presumably implemented this way.

The principle is of combination by trial and error of simplest, smallest,
stable, reusable building blocks - specialized cells.

There is no general-purpose computer in a brain. It is a network of structured
layers of microcontrollers.)
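
For what it's worth, the kind of primitive threshold unit described above fits in a few lines; the weights and thresholds below are arbitrary illustrations, not anything from physiology.

```python
# A minimal threshold unit: the output fires when a weighted sum of the
# inputs crosses a threshold.

def threshold_unit(inputs, weights, threshold):
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# The same unit wired as an AND gate and an OR gate, by choosing weights:
print(threshold_unit([1, 1], [0.6, 0.6], 1.0))  # AND(1,1) -> 1
print(threshold_unit([1, 0], [0.6, 0.6], 1.0))  # AND(1,0) -> 0
print(threshold_unit([1, 0], [1.0, 1.0], 1.0))  # OR(1,0)  -> 1
```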

------
danharaj
It's interesting how the reification of statistical reasoning, that is,
understanding statistics symbolically and applying it to abstract symbols, is
far more fraught than the implicit statistical reasoning that this article
describes.

~~~
stormbrew
Your brain is also able to calculate the parabolic arc of a baseball someone
threw to you whether or not you could ever do it on paper.

~~~
IanCal
That's not quite true. When catching a ball we constantly adjust as we go, so
we're not perfectly calculating where it will land. We can train humans to
roughly approximate where a ball will land; that's about it.

~~~
stormbrew
This is just a matter of precision. The fact that you can probably start
adjusting in a meaningful and probably correct way while the ball is still
going up is significant. Somehow that person's brain is doing a thing they may
not have the symbolic knowledge to describe.

~~~
IanCal
> Somehow that person's brain is doing a thing they may not have the symbolic
> knowledge to describe.

And we can draw roughly correct circles, but most people don't know the
equation for one. I'm not sure that's particularly significant, though I may
have missed your point.

~~~
Houshalter
No, the ability to draw circles is impressive. The amount of math and
calculation the brain does to perform such a feat is incredible. Programming
robots to do even a limited range of the things humans do is nearly
impossible.

------
lostmsu
As shown by haberman's comment and a reply by yummyfajitas below, our brain
clearly does not operate on pure statistics, as most people are subject to
[https://en.wikipedia.org/wiki/Conjunction_fallacy](https://en.wikipedia.org/wiki/Conjunction_fallacy)
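
The incoherence is easy to check mechanically: in any joint distribution whatsoever, P(A and B) can never exceed P(A), so rating the conjunction as more likely, as most people do in the classic Linda problem, is never statistically valid. A quick brute-force sanity check over random distributions:

```python
# For random joint distributions over two binary events A and B, verify
# that P(A and B) <= P(A) always holds.

import itertools
import random

random.seed(0)
for _ in range(1000):
    # Random joint distribution over (A, B) in {0,1}^2.
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    joint = {ab: w / total
             for ab, w in zip(itertools.product([0, 1], repeat=2), weights)}
    p_a = joint[(1, 0)] + joint[(1, 1)]
    p_a_and_b = joint[(1, 1)]
    assert p_a_and_b <= p_a
print("P(A and B) <= P(A) held for all 1000 random distributions")
```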

------
Lxr
I think the crowd that maintains statistical machine learning is not 'real' AI
should read this study.

------
known
The human brain is vulnerable to
[https://en.wikipedia.org/wiki/List_of_fallacies](https://en.wikipedia.org/wiki/List_of_fallacies)

------
miguelrochefort
In other news, the sky is blue.

I'm shocked that this is not obvious to everyone.

------
excel2flow
Can prejudice be renamed to "statistical assessments"? :)

------
le0n
I wonder whether this is the same sense in which the brain also uses logic to
reason about stuff, and physics to interact with the world.

------
foxhop
I would say, our brains use quick estimations of statistics to calculate
confidence and rationalize decisions.

------
eli_gottlieb
Well, that's another nail in the coffin for all nonprobabilistic views of
cognition.

------
Hernanpm
That is why I agree with Arthur Benjamin that we should teach statistics at
the school level.

------
JI_Sanders
Ok guys, time to weigh in. Of course we aren't trying to claim that _all_
brain processing reduces to Bayes. Our claim is that we experience the
statistical likelihood of our beliefs about our input data (from sense organs
or from long-term memory retrieval) as a sense of confidence. Whether that
sense is actually generated by a heuristic computation rather than a neural
implementation of Bayes' rule is almost irrelevant, because the end product
looks identical to Bayes in the ways that matter for use of a confidence
variable in computation. We work through the mathematics of how statistical
confidence should appear in different projections of decision data here:

[http://www.biorxiv.org/content/early/2016/01/01/017400.abstr...](http://www.biorxiv.org/content/early/2016/01/01/017400.abstract)

(To appear in
[http://www.mitpressjournals.org/loi/neco](http://www.mitpressjournals.org/loi/neco)
in the next month or so)

By necessity, we had to study decisions that could be replicated in a lab, so
no mass-killings of cows. That being said, the brain is constantly faced with
a statistical analysis problem - to figure out what's really happening outside
of itself. A Bayesian machine is much more useful for assigning a
classification to sense data and computing its likelihood than for determining
whether your elders are correct when they insist that God is going to cause a
plague in 2 years if we don't burn some bovines. Point being, the brain can
construct and
switch between (occasionally hilarious) decision making strategies depending
on context - especially in the murkier, information-impoverished realm of
social decision making.

Dealing with sense and memory data is a problem common to all animals, and
evolution has had millions of generations to reject suboptimal processors.
Processing language and abstract symbols to reinforce tribal social cohesion
or implement a political agenda is a different category of decision that is
newer in evolutionary terms, and definitely buggier. I hope this helps clarify
what we are and aren't claiming!
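
As a reader's toy sketch of the kind of computation being described (the rates and numbers below are invented for illustration and are not the authors' actual model): treat confidence as the posterior probability that the chosen side was correct, given left/right click counts assumed to be Poisson with known rates.

```python
# Toy statistical-confidence computation (invented rates, not the paper's
# model): posterior probability that the "right" side carried the high click
# rate, under equal priors and Poisson click counts.

import math

def confidence(n_right, n_left, rate_hi=2.0, rate_lo=1.0):
    def loglik(n, rate):
        return n * math.log(rate) - rate  # Poisson log-likelihood up to n!

    lr = loglik(n_right, rate_hi) + loglik(n_left, rate_lo)  # "right" hypothesis
    ll = loglik(n_right, rate_lo) + loglik(n_left, rate_hi)  # "left" hypothesis
    return 1.0 / (1.0 + math.exp(ll - lr))

print(confidence(8, 2))  # lopsided evidence: high confidence
print(confidence(5, 5))  # balanced evidence: confidence near 0.5
```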

As an aside (since this is the right crowd!) we open-sourced the circuit
design files, firmware and APIs for the real-time stimulus generator we used
here:

[https://github.com/sanworks/PulsePal](https://github.com/sanworks/PulsePal)

with documentation here:

[https://sites.google.com/site/pulsepalwiki/home](https://sites.google.com/site/pulsepalwiki/home)

and here:

[http://journal.frontiersin.org/article/10.3389/fneng.2014.00...](http://journal.frontiersin.org/article/10.3389/fneng.2014.00043/full)

This system allowed us to shut off the ongoing audio click streams within 100
microseconds of a decision button press - so we could look backwards in time
at the precise evidence stream used to make each choice and confidence report,
for clues about what algorithm the brain used to process the streams. The
follow-up study (under review) uses this data to evaluate different models of
how decision and confidence computations are implemented in real-time.

