
Things everyone in ML should know about belief formation in humans [video] - scribu
https://slideslive.com/38921495/how-to-know
======
DevX101
One of the most impactful essays I read this year was this one by Kevin Simler
on how people adopt beliefs: [https://meltingasphalt.com/crony-
beliefs/](https://meltingasphalt.com/crony-beliefs/)

It's changed my beliefs about beliefs. I've always thought that people adopt
erroneous beliefs either because of logical fallacies, repeated exposure to
false information, or the persuasion of a gifted orator.

His argument is that people adopt beliefs either because they appear to be
true or because they are useful. Beliefs can provide utility by giving you
social approval or by removing cognitive dissonance while you pay your bills.

Chances are that you and I hold beliefs that are useful but may not be true.
If you're in the tech industry, you're probably less critical of your
company and more techno-optimistic in general in part because you gain wealth
by going along with corporate and industry propaganda.

I'm now more suspicious of any beliefs I have that are conveniently useful to
me.

~~~
jonplackett
This reminds me of a book I read a while back. I think it was called The
origin of myth or something like that.

It defined a myth as something that people know isn’t true, but choose to
believe it is, and act as if it is, because it’s beneficial to them or
society.

But then some people forget that they’re only meant to be pretending it’s true
and start to really believe it, and very quickly you’ve got religion.

~~~
AnIdiotOnTheNet
You say all that like it's always a bad thing, but as Pratchett once wrote:
"You need to believe in things that aren't true. How else can they become?"

~~~
Jarwain
The full quote:

"You're saying humans need... fantasies to make life bearable."

REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE
HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

"Tooth fairies? Hogfathers? Little—"

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

"So we can believe the big ones?"

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

"They're not the same at all!"

YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER
AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE,
ONE MOLECULE OF MERCY. AND YET—Death waved a hand. AND YET YOU ACT AS IF THERE
IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME...SOME RIGHTNESS IN THE
UNIVERSE BY WHICH IT MAY BE JUDGED.

"Yes, but people have got to believe that, or what's the point—"

MY POINT EXACTLY.

Terry Pratchett, Hogfather

~~~
AnIdiotOnTheNet
You didn't even finish the quote to the point I was quoting. Here's the rest
of it:

She tried to assemble her thoughts.

THERE IS A PLACE WHERE TWO GALAXIES HAVE BEEN COLLIDING FOR A MILLION YEARS,
said Death, apropos of nothing.

DON’T TRY TO TELL ME THAT’S RIGHT.

“Yes, but people don’t think about that,” said Susan. “Somewhere there was a
bed…”

CORRECT. STARS EXPLODE, WORLDS COLLIDE, THERE’S HARDLY ANYWHERE IN THE
UNIVERSE WHERE HUMANS CAN LIVE WITHOUT BEING FROZEN OR FRIED, AND YET YOU
BELIEVE THAT A…A BED IS A NORMAL THING. IT IS THE MOST AMAZING TALENT.

“Talent?”

OH, YES. A VERY SPECIAL KIND OF STUPIDITY. YOU THINK THE WHOLE UNIVERSE IS
INSIDE YOUR HEADS.

“You make us sound mad,” said Susan. A nice warm bed…

NO. YOU NEED TO BELIEVE IN THINGS THAT AREN’T TRUE. HOW ELSE CAN THEY BECOME?
said Death.

~~~
mistermann
I absolutely love these kinds of subversive ideas, reminds me of:
[https://slatestarcodex.com/2014/07/30/meditations-on-
moloch/](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/)

------
d-d
I'm convinced she's right about our brains overfitting to the first belief
that gets persistent feedback, even in the complete absence of reason.

Because I'm doing it right now.

This seems to explain a lot about why news feeds cause so much outrage and so
many odd beliefs. The fact that I can't even verify said outrage and odd
beliefs in the real world is the real punch to the gut, and a good reason to
get out more. And maybe unplug permanently.

------
Agebor
Great to see these views are becoming more mainstream. The talk does not
mention it, but it's likely convergent with neuroscience ideas of:

\- Bayesian Brain

\- Theory of constructed emotions
([https://www.youtube.com/watch?v=0gks6ceq4eQ](https://www.youtube.com/watch?v=0gks6ceq4eQ))

\- Free energy principle and Active Inference
([https://www.youtube.com/watch?v=Y1egnoCWgUg](https://www.youtube.com/watch?v=Y1egnoCWgUg))

A good overview is also here: [https://towardsdatascience.com/why-
intelligence-might-be-sim...](https://towardsdatascience.com/why-intelligence-
might-be-simpler-than-we-think-1d3d7feb5d34)

One issue to be reconciled, though: some of those ideas talk about "keeping
uncertainty in the sweet spot, not too high or low" while others talk about
"minimising uncertainty/prediction error". I think the difference will turn
out to hinge only on how far into the future the prediction reaches -
optimising for the long term vs. the short term.
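One way to see why the two framings may coincide: for a yes/no observation, the expected information gain equals the current entropy of your belief about it, so a learner minimizing long-run uncertainty is driven to sample exactly where short-run uncertainty is highest. A toy sketch (the candidate beliefs are made up for illustration):

```python
import math

def bernoulli_entropy(p):
    """Uncertainty (bits) about a yes/no question you believe is true with prob p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Hypothetical questions, with the learner's current belief that each answer is "yes".
beliefs = {"almost certain": 0.95, "sweet spot": 0.5, "long shot": 0.1}

# The expected information gain from observing an answer equals the question's
# current entropy, so the question nearest 0.5 belief is the fastest way to learn -
# and each such observation is also the biggest step toward low overall uncertainty.
best = max(beliefs, key=lambda k: bernoulli_entropy(beliefs[k]))
print(best)  # prints "sweet spot"
```

So "seek the sweet spot" can be read as the greedy, short-horizon policy of the same objective that "minimise uncertainty" states over the long horizon.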

------
ci5er
I'd be interested to also see material talking about the social basis for
belief formation. We are primarily emotional and social creatures, and our
brains seem to be adapted to forming beliefs that may or may not match the
world, but certainly do match what we are socially expected to believe.

------
joe_the_user
Fascinating video.

The preferred learning process reminds me a lot of the book Flow by Mihaly
Csikszentmihalyi: find the "sweet spot" between certainty and uncertainty,
seek out contradictions to things you are learning, etc. Indeed, a lot of
these points come up in various "human potential" psychology systems.

Other parts of the video give me the impression that belief-formation
processes are well suited for interacting with the real world but not at all
suited for processing the stream of information available online.

~~~
CuriouslyC
You can model the optimal thing to learn mathematically, by framing learning
as question asking, and using information and decision theory.

Any time you ask a question, the answer will update your internal distribution
of potential future (or present unknown) states of the world. The information
gain of a question is the reduction in entropy of your internal future world
state distribution. You can assign a value to this information gain using a
loss function, which tells you the expected loss of making the
maximum-likelihood bet about the future state of the world given your
current knowledge. The difference in expected losses before and after the
answer is the "value" of the information.

To bring this into the realm of the practical, if there is a payout for
knowledge, you should choose to learn things that minimize the chance that you
will make a costly bad decision, with an eye to how probable those outcomes
are. If there is no payout, you should choose to learn things that will
produce the greatest information gain, which is in areas where you are
currently very ignorant.

TLDR: Choose to learn things that will help you avoid likely catastrophes when
learning for profit, learn a little bit about a lot of very different things
when learning for fun.

~~~
joe_the_user
She goes into this a bit in the video and notes that humans form beliefs more
quickly than would be appropriate under such a probability-distribution model.

The thing is, whether this "optimal learning approach" is actually optimal
depends on the world and on the actual, not hypothetical, distribution of
futures (something we can't claim to have a complete model of). Humans do
extremely well in situations where AI and robots don't, so I'd say the jury
is still out here.

~~~
inimino
> more quickly than would be appropriate given such probability distribution
> model.

As humans we have much sharper priors over hypothesis space than what we can
easily model, which probably explains this discrepancy.

~~~
joe_the_user
The video claims the explanation is different. The experiments involved
things where humans have no priors, and the humans still formed beliefs more
quickly than a Bayesian model would justify - and they didn't necessarily
form correct beliefs.

The argument of the video is that humans form beliefs to facilitate
information exploration. In this context, any belief can be better than none.

The impression I get from the discussion is that human beliefs and behaviors
tend to differentiate - people often have slightly different ideas about
everything; "what is a bowl" was one example. People pick up beliefs easily
and change beliefs as they go along - as long as they have feedback.

This apparently works for groups of hunter-gatherers, and even for people
driving cars, but less well for people using the Internet to decide whether
to vaccinate their children.

~~~
inimino
> where humans have no priors

In a Bayesian model there's no such thing as having no priors, that's the
problem with arguing against Bayesian human reasoning with a model that can't
capture the richness of human priors (which means modelling all relevant
knowledge and intuition, including innate human instincts). And human priors
include very strongly-held ones like "the world is basically comprehensible,
governed by rules that we can discover and understand." We cannot prove that,
but to the extent that we are wrong about it, all cogitation is useless, so we
assume it.

Our beliefs about "what is a bowl" include that it is an instrumental concept
created by other agents similar to us in order to facilitate communication.
This justifies very strong priors that it will be a simple concept and easy to
generalize from small numbers of examples, at least for us. All this just by
virtue of being a common word. So I don't see any way to argue that human
behaviour is non-Bayesian here unless one ignores relevant prior information
or ignores the decision theory question "what is the consequence of being
wrong about what a bowl is".
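The sharp-priors point can be made concrete with a toy Beta-Bernoulli update (the prior strengths here are invented for illustration): given the same handful of observations, a learner with a strong prior commits to a belief far faster than a model with a flat prior would predict.

```python
# Beta-Bernoulli updating: compare a flat prior with a sharp prior.
# After the same three "yes" observations, the sharp-prior learner is
# already confident, while the flat-prior learner is still fairly uncertain.

def update(alpha, beta, successes, failures):
    """Conjugate update of a Beta(alpha, beta) prior on a yes/no proposition."""
    return alpha + successes, beta + failures

def mean(alpha, beta):
    """Posterior mean belief that the proposition is true."""
    return alpha / (alpha + beta)

flat = (1, 1)    # Beta(1, 1): no prior commitment
sharp = (9, 1)   # Beta(9, 1): strong innate/instrumental prior, as argued above

flat = update(*flat, 3, 0)    # three confirming observations
sharp = update(*sharp, 3, 0)

print(mean(*flat))   # 4/5  = 0.8
print(mean(*sharp))  # 12/13, roughly 0.92 - near-certain after three examples
```

So behavior that looks "faster than Bayesian" against a flat-prior model can still be Bayesian against the priors humans actually bring to the task.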

------
anotheryou
If you want a great read about how people spiral into conspiracies:
[https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-
cha...](https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-
it-is-to-flee-a-cult)

------
geekfactor
You may also like my recent interview with her for the TWIML AI Podcast:

[https://twimlai.com/twiml-talk-330-how-to-know-with-
celeste-...](https://twimlai.com/twiml-talk-330-how-to-know-with-celeste-
kidd/)

------
enriquto
Video is not visible in my country ("because of its privacy settings"). Any
transcript/explanation available?

~~~
scribu
Someone reposted it on YouTube:
[https://youtu.be/bvebjL48f-w](https://youtu.be/bvebjL48f-w)

------
lordnacho
This is actually two talks, both worth your time. I see no mention of it in
the other comments, but the second part is roughly about what the MeToo
movement addresses. The author is personally affected, and I think it's
worth encouraging people in her situation to speak up.

