Things everyone in ML should know about belief formation in humans [video] (slideslive.com)
236 points by scribu 25 days ago | 42 comments



One of the most impactful essays I read this year was this one by Kevin Simler on how people adopt beliefs: https://meltingasphalt.com/crony-beliefs/

It's changed my beliefs about beliefs. I'd always thought that people adopt erroneous beliefs either because of logical fallacies, repeated exposure to false information, or by listening to a gifted orator.

His argument is that people adopt beliefs either because they appear to be true or because they are useful. Beliefs can provide utility by giving you social approval, or by removing cognitive dissonance while you pay your bills.

Chances are that you and I hold beliefs that are useful but may not be true. If you're in the tech industry, you're probably less critical about your company and more techno-optimistic in general, in part because you gain wealth by going along with corporate and industry propaganda.

I'm now more suspicious of any beliefs I have that are conveniently useful to me.


I'm surprised that anyone would be surprised by this fact. Have you never thought about how, for example, ad agencies for cigarette companies felt about their work, knowing they were killing people? It seems like a fairly basic observation about human nature.


It wasn't self-evident to me or to many millions of people. There's an enormous amount of energy spent attempting to convince people that they're voting for the wrong political party, that climate change is real, that God does or doesn't exist, by building logical arguments from observable facts.

I rarely see arguments of persuasion rooted in utility coming from honest, clear thinkers. Utility persuasion appears to be more common among politicians and charlatans, though ("Vote for me because I'll make you and your community feel good about yourselves").


> convince people that ... climate change is real

Have you looked at how much easier it was to convince people that the hole in the ozone layer was real?

It was a much less observable phenomenon: it was too early in the life of the problem for many people to have been affected, and it had much less visible symptoms. Yet nearly nobody doubted it, and everybody agreed on taking action.


I think the main reason the world agreed on limiting CFCs to prevent ozone depletion was that there were technological alternatives in HFCs. So adopting the belief that CFCs are dangerous was not that costly, because we could just use something else. (Notably, DuPont fought viciously against ozone regulations because they would have been specifically harmful to the company.)

But let's imagine for a moment that there were no technological alternatives to CFCs, where stopping ozone depletion would actually mean giving up on refrigeration and air conditioning. Far fewer people would believe that ozone depletion was a real phenomenon if the natural consequence of that belief meant giving up very valuable conveniences.

Similarly, I don't think it's a coincidence that public moral attitudes about slavery changed with the advent of agricultural mechanization.


Exactly this. The hole in the ozone layer could be solved, so a huge political movement appeared, solved it, and went away.

Compared to that, the only thing one can get from the current discussions about global warming is cognitive dissonance. So people don't care, or actively do not believe.

I wonder how far we could go if the political discourse changed from "how do we stop emitting greenhouse gases?" to "how do we increase solar power 100-fold?", "how do we electrify industry and transportation?", and "how do we capture CO2 into synthetic substances?"


I think a lot of people don't realize this: that many beliefs come from culture. Which explains why you can see radical shifts in ideologies even without a "changing of the guard". Some beliefs are deeply rooted personally, but many are deeply rooted socially. But then again, it's easier to blame a single person.


The utility was obvious. All people had to do was buy a new type of hair spray that didn't have CFCs and they were helping to fix the planet. Sales of hairspray went up. It's like the point Zizek likes to make about recycling being the new system of indulgences: we can feel good about ourselves and absolve any sense of guilt about our wasteful and polluting lifestyles merely by putting cans into the right bin. Behavior is utilitarian. Belief is behavior.


Because people could emotionally connect the ozone hole to bad sunburns.

Plus, the scope was much smaller, which made it easier to move the needle.


>> convince people that ... climate change is real

> Have you looked at how much easier it was to convince people that the hole in the ozone layer was real?

> It was a much less observable phenomenon: it was too early in the life of the problem for many people to have been affected, and it had much less visible symptoms. Yet nearly nobody doubted it, and everybody agreed on taking action.

I don't think those two problems can be compared like this. There are a lot of differences between them. For example:

- Everyone was (or was going to be) universally affected negatively by the hole in the ozone layer, while there are a lot of people who stand to benefit from the results of mild global warming (for a concrete example, consider the opening up of the northern passage between Europe and Asia along Russia's coast).

- The primary mechanism for ozone layer depletion was very easy to understand: refrigerant gas destroys ozone. Regarding global warming or climate change, the way we can affect it (negatively or positively) is via human greenhouse gas emissions, but there are additional mechanisms at work: there are non-human greenhouse gas emissions, and there are various feedback loops and tipping points, only some of which are known about.

- Also, the way to remedy. Regarding the ozone layer, we just had to ban some chemicals we quickly found replacements for, so there was no great sacrifice. Regarding greenhouse gases, many people would agree in principle that we should lower emissions, but how do we actually do this? Some countries could build more nuclear power plants; solar and wind are expensive and unreliable unless backed by expensive large-scale infrastructure. Then we also have the matter of gases already emitted by developed countries as part of their past development, and the perceived hypocrisy of denying the same technological advance to developing countries. This leads to ideas that are very difficult to reconcile. For example, if there were some kind of global accounting for greenhouse gas emissions that took past emissions into account, resulting in a system where developed countries would have to effectively purchase "carbon credits" from developing countries, how do we account for underprivileged areas of so-called developed countries? No matter how we attempt to resolve this, people will consider it unfair and will vote in politicians who will revert it.

Please note I didn't even mention the issue of non-compliance from countries such as China (or the US).

This is why I think essentially nothing will be done about greenhouse gas emissions on a global scale, and we should plan for how to deal with the consequences of the change. There is also the risk of well-meaning climate-geoengineering attempts fucking it up really badly, but such attempts shouldn't be that difficult for state players to prevent.


> I'm surprised that anyone would be surprised by this fact.

Here is an idea I've been thinking about a lot lately, one that I claim can easily be observed if one is willing to look (and which is supported to some degree by the ~fact that it can also easily be observed that people appear not to be willing to look, keeping in mind that absence of evidence is not evidence of absence):

>> "Chances are that you and I hold false beliefs that are useful that may not be true."

Most people will have no problem accepting this idea, not only that it applies to people in general but also that it applies to themselves (see: this thread), provided the topic of conversation is the abstract phenomenon itself. However, if the topic of discussion is something else, at the object level (particularly Culture War topics), the ability to even acknowledge this phenomenon within forum conversation, let alone admit that it may apply to oneself, seems (based on my experimentation thus far) to vanish completely, almost without exception.

It is also my perception that this theory seems to be particularly unpopular in high-intelligence/rationalist communities, which makes it even more interesting.

It's far too early to form any strong conclusions about this one way or another, but my intuition tells me there's something interesting and important going on here.

It seems unlikely that this is a novel idea; has anyone encountered it elsewhere? Is there a name for it?


This reminds me of a book I read a while back. I think it was called The origin of myth or something like that.

It defined a myth as something that people know isn’t true, but choose to believe it is, and act as if it is, because it’s beneficial to them or society.

But then some people forget that they’re only meant to be pretending it’s true and start to really believe it, and very quickly you’ve got religion.


Sounds very similar to Yuval Noah Harari's books. A big part of his thesis is that humans became the dominant species of humanoids because of our ability to work collectively in massive social groups by coordinating our behavior with stories that aren't objectively true.


I would not be surprised if both these books are referring to the same thing, from similar sources just using different ways to describe the same concept.

I have watched a talk by Yuval Noah Harari, and read a few blog posts about his idea of “fictions”. I have not read the book so I stand to be corrected.

My comments below relate to his idea of “fiction”, which is a foundation of his thesis.

The concept of “fiction” as described in the Talk(and book?) appears to have a long history, but called by different names.

I have traced some of this thinking to the early 1900s, which itself appears to be based on work from the 1700s, and maybe even further back. It seems to be a common idea made palatable for mass consumption.


You say all that like it's always a bad thing, but as Pratchett once wrote: "You need to believe in things that aren't true. How else can they become?"


The full quote:

"You're saying humans need... fantasies to make life bearable."

REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

"Tooth fairies? Hogfathers? Little—"

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

"So we can believe the big ones?"

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

"They're not the same at all!"

YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET—Death waved a hand. AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME...SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.

"Yes, but people have got to believe that, or what's the point—"

MY POINT EXACTLY.

Terry Pratchett, Hogfather


You didn't even finish the quote to the point I was quoting. Here's the rest of it:

She tried to assemble her thoughts.

THERE IS A PLACE WHERE TWO GALAXIES HAVE BEEN COLLIDING FOR A MILLION YEARS, said Death, apropos of nothing.

DON’T TRY TO TELL ME THAT’S RIGHT.

“Yes, but people don’t think about that,” said Susan. “Somewhere there was a bed…”

CORRECT. STARS EXPLODE, WORLDS COLLIDE, THERE’S HARDLY ANYWHERE IN THE UNIVERSE WHERE HUMANS CAN LIVE WITHOUT BEING FROZEN OR FRIED, AND YET YOU BELIEVE THAT A…A BED IS A NORMAL THING. IT IS THE MOST AMAZING TALENT.

“Talent?”

OH, YES. A VERY SPECIAL KIND OF STUPIDITY. YOU THINK THE WHOLE UNIVERSE IS INSIDE YOUR HEADS.

“You make us sound mad,” said Susan. A nice warm bed…

NO. YOU NEED TO BELIEVE IN THINGS THAT AREN’T TRUE. HOW ELSE CAN THEY BECOME? said Death


I absolutely love these kinds of subversive ideas, reminds me of: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Isn't this conflating a belief with imagination though?

Creating and inventing anything (societies, art, science, technology, relationships, whatever) obviously requires believing that it will work or is possible before it can be created.

But believing the earth was created in 7 days by a magical being will not make it so no matter how many people believe it.


Likewise, believing all Christians take scripture literally will not make it so no matter how many people believe it.


> and very quickly you’ve got religion

Or illusions of free will, rationality, etc


Free will is the fundamental (IMHO incorrect) pillar of basically all religion and society.

But life would be pretty hard to live if you didn't believe in it just a little bit.


Sam Harris and many other atheists are hardcore believers.

Similarly, many hardcore atheists have deep faith in the rationality of their beliefs and perception of reality.


Simler also co-wrote a book with economist Robin Hanson that delves further into the phenomenon of hidden motives and the social benefit of beliefs: https://elephantinthebrain.com


I'm not sure who originally said it, but I think it makes a lot of sense to think about belief formation this way:

The Cerebrum serves to please the Limbic System

That is, all of our supposed high minded reasoning and long term planning are really just adding complex explanations to what is basically pre-mammalian desires and impulses.


I'm convinced she's right about our brains overfitting to the first belief that has persistent feedback, even in the complete absence of reason.

Because I'm doing it right now.

This seems to explain a lot about news feeds and why they cause so much outrage and odd beliefs. The fact that I can't even verify said outrage and odd beliefs in the real world is the real punch to the gut, and a good reason to get out more. And maybe unplug permanently.


Great to see these views are becoming more mainstream. The talk does not mention it, but it's likely convergent with neuroscience ideas of:

- Bayesian Brain

- Theory of constructed emotions (https://www.youtube.com/watch?v=0gks6ceq4eQ)

- Free energy principle and Active Inference (https://www.youtube.com/watch?v=Y1egnoCWgUg)

A good overview is also here: https://towardsdatascience.com/why-intelligence-might-be-sim...

The issue to be reconciled, though, is that some of those ideas talk about "keeping uncertainty in the sweet spot, not too high or low" while others talk about "minimising uncertainty/prediction error". I think the difference will turn out to be only in how far into the future the prediction reaches, i.e. optimising for the long term vs the short term.


I'd be interested to also see material talking about the social basis for belief formation. We are primarily emotional and social creatures, and our brains seem to be adapted to forming beliefs that may or may not match the world, but certainly do match what we are socially expected to believe.


Fascinating video,

The preferred learning process reminds me a lot of the book Flow by Mihaly Csikszentmihalyi: find the "sweet spot" between certainty and uncertainty, seek contradictions to the things you are learning, etc. Indeed, many of the points come up frequently in various "human potential" psychology systems.

Other parts of the video give me the impression that belief-formation processes are well suited for interacting with the real world but not at all suited for processing the stream of information available online.


You can model the optimal thing to learn mathematically, by framing learning as question asking, and using information and decision theory.

Any time you ask a question, the answer will update your internal distribution of potential future (or present unknown) states of the world. The information gain of a question is the reduction in entropy of your internal future world state distribution. You can assign a value to this information gain using a loss function, which will tell you the expected loss by making the maximum likelihood bet about the future state of the world given your current knowledge of the world. The difference in expected losses is the "value" of the information.

To bring this into the realm of the practical, if there is a payout for knowledge, you should choose to learn things that minimize the chance that you will make a costly bad decision, with an eye to how probable those outcomes are. If there is no payout, you should choose to learn things that will produce the greatest information gain, which is in areas where you are currently very ignorant.

TLDR: Choose to learn things that will help you avoid likely catastrophes when learning for profit, learn a little bit about a lot of very different things when learning for fun.
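As a rough sketch of the bookkeeping described above, here it is in a few lines of Python. The prior over world states, the question's answer likelihoods, and the 0/1 loss are made-up illustrative numbers, not taken from any source:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Prior belief over three possible world states.
prior = [0.5, 0.3, 0.2]

# A yes/no question: probability of a "yes" answer under each world state.
likelihood_yes = [0.9, 0.2, 0.1]

# Marginal probability of each answer.
p_yes = sum(p * l for p, l in zip(prior, likelihood_yes))
p_no = 1 - p_yes

# Posterior beliefs for each possible answer (Bayes' rule).
post_yes = [p * l / p_yes for p, l in zip(prior, likelihood_yes)]
post_no = [p * (1 - l) / p_no for p, l in zip(prior, likelihood_yes)]

# Expected information gain = prior entropy - expected posterior entropy.
info_gain = entropy(prior) - (p_yes * entropy(post_yes) + p_no * entropy(post_no))

# "Value" of the answer under 0/1 loss: how much more often does the
# maximum-likelihood bet succeed after hearing it?
loss_before = 1 - max(prior)
loss_after = p_yes * (1 - max(post_yes)) + p_no * (1 - max(post_no))
value = loss_before - loss_after

print(f"expected information gain: {info_gain:.3f} bits")
print(f"value of information (0/1 loss): {value:.3f}")
```

Here the question is "valuable" precisely because the maximum-likelihood bet is wrong less often after hearing the answer; a question whose answer is independent of the world state would score zero on both measures.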


She goes into this a bit in the video and notes that humans form beliefs more quickly than would be appropriate given such a probability distribution model.

The thing is, whether this "optimal learning approach" is actually optimal or not depends on the world and the actual, not hypothetical, distributions of futures (something we can't claim to have a complete model of). Humans do extremely well in situations where AI and robots don't, so I'd say the jury is still out here.


> more quickly than would be appropriate given such probability distribution model.

As humans we have much sharper priors over hypothesis space than what we can easily model, which probably explains this discrepancy.


Well, our priors aren't easily expressible in terms of simple probability distributions. They could probably be modeled fairly well by Gaussian process priors, though.


The video claims the explanation is different. The experiments were around things where humans have no priors, and the humans still form beliefs more quickly than is justified by a Bayesian model; they don't necessarily form correct beliefs, either.

The argument of the video is humans form beliefs to facilitate information exploration. In this context, any belief can be better than none.

The impression I get from the discussion is that human beliefs and behaviors tend to differentiate: people often have slightly different ideas about everything ("what is a bowl" was one example). People pick up beliefs easily and change beliefs as they go along, as long as they have feedback.

This apparently works for groups of hunter-gatherers and even for people driving cars, but less well for people using the Internet to decide whether to vaccinate their children.
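The gap the talk points at can be made concrete with a toy Bayesian update (my own illustrative numbers, not from the talk): starting from a flat prior, a few weakly informative observations leave an ideal Bayesian observer far from certain, whereas the claim is that humans often commit to a belief well before this point.

```python
def bayes_update(prior_h, p_obs_given_h, p_obs_given_not_h):
    """One step of Bayes' rule for a binary hypothesis H."""
    evidence = prior_h * p_obs_given_h + (1 - prior_h) * p_obs_given_not_h
    return prior_h * p_obs_given_h / evidence

belief = 0.5  # flat prior: no idea whether H is true
for _ in range(3):  # three observations, each only weakly favouring H
    belief = bayes_update(belief, p_obs_given_h=0.6, p_obs_given_not_h=0.4)

print(f"posterior after 3 weak observations: {belief:.2f}")  # 0.77
```

After three such observations the posterior sits around 0.77, i.e. well short of the confidence a person who has already "formed a belief" would report.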


> where humans have no priors

In a Bayesian model there's no such thing as having no priors, that's the problem with arguing against Bayesian human reasoning with a model that can't capture the richness of human priors (which means modelling all relevant knowledge and intuition, including innate human instincts). And human priors include very strongly-held ones like "the world is basically comprehensible, governed by rules that we can discover and understand." We cannot prove that, but to the extent that we are wrong about it, all cogitation is useless, so we assume it.

Our beliefs about "what is a bowl" include that it is an instrumental concept created by other agents similar to us in order to facilitate communication. This justifies very strong priors that it will be a simple concept and easy to generalize from small numbers of examples, at least for us. All this just by virtue of being a common word. So I don't see any way to argue that human behaviour is non-Bayesian here unless one ignores relevant prior information or ignores the decision theory question "what is the consequence of being wrong about what a bowl is".


Is there a reference or book you can point to re: decision and information theory?


"Information theory, inference and learning algorithms" is a good (and free) introduction.


If you want a great read about how people spiral in to conspiracies: https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-cha...


You may also like my recent interview with her for the TWIML AI Podcast:

https://twimlai.com/twiml-talk-330-how-to-know-with-celeste-...


Video is not visible in my country ("because of its privacy settings"). Any transcript/explanation available?


Someone reposted it on YouTube: https://youtu.be/bvebjL48f-w


This is actually two talks, both worth your time. I see no mention of it in the other comments, but the second part is about roughly what the MeToo movement talks about. The author is personally affected, and I think it's worth encouraging people in her situation to speak up.



