It's changed my beliefs about beliefs. I'd always thought that people adopt erroneous beliefs through logical fallacies, repeated exposure to false information, or listening to a gifted orator.
His argument is that people adopt beliefs because they appear to be true or because they are useful. Beliefs can provide utility by giving you social approval, or by removing cognitive dissonance while you pay your bills.
Chances are that you and I hold beliefs that are useful but may not be true. If you're in the tech industry, you're probably less critical of your company and more techno-optimistic in general, in part because you gain wealth by going along with corporate and industry propaganda.
I'm now more suspicious of any beliefs I have that are conveniently useful to me.
I rarely see arguments rooted in utility coming from honest, clear thinkers. Utility persuasion appears to be more common among politicians and charlatans ("Vote for me because I'll make you and your community feel good about yourselves").
Have you noticed how much easier it was to convince people that the hole in the ozone layer was real?
It was a much less observable phenomenon, it was too early in the life of the problem for many people to be affected, and it had far less visible symptoms. Yet nearly nobody doubted it, and everybody agreed on taking action.
But let's imagine for a moment that there were no technological alternatives to CFCs, so that stopping ozone depletion would actually mean giving up refrigeration and air conditioning. Far fewer people would believe that ozone depletion was a real phenomenon if the natural consequence of that belief were giving up very valuable conveniences.
Similarly, I don't think it's a coincidence that public moral attitudes about slavery changed with the advent of agricultural mechanization.
Compared to that, the only thing one can get from the current discussions about global warming is cognitive dissonance. So people either don't care or actively disbelieve.
I wonder how far we could go if the political discourse changed from "how do we stop emitting GW gases?" to "how do we increase solar power 100 fold?", "how do we electrify industry and transportation?", and "how do we capture CO2 into synthetic substances?"
Plus, the scope was much smaller, which made it easier to move the needle.
>It was a much less observable phenomenon, it was too early in the life of the problem for many people to be affected, and it had far less visible symptoms. Yet nearly nobody doubted it, and everybody agreed on taking action.
I don't think those two problems can be compared like this. There are a lot of differences between them. For example:
- Everyone was (or was going to be) universally and negatively affected by the hole in the ozone layer, while a lot of people stand to benefit from the results of mild global warming (for a concrete example, consider the opening of the northeast passage between Europe and Asia along Russia's coast).
- The primary mechanism of ozone layer depletion was very easy to understand: refrigerant gases destroy ozone. With global warming or climate change, the way we can affect it (negatively or positively) is via human greenhouse gas emissions, but there are additional mechanisms at play: non-human greenhouse gas emissions, and various feedback loops and tipping points, only some of which are known.
- Also, the way to remedy it. For the ozone layer, we just had to ban some chemicals we quickly found replacements for, so there was no great sacrifice. With greenhouse gases, many people would agree in principle that we should lower emissions, but how do we actually do this? Some countries could build more nuclear power plants; solar and wind are expensive and unreliable unless backed by expensive large-scale infrastructure. Then there is the matter of gases already emitted by developed countries as part of their past development, and the perceived hypocrisy of denying the same technological advance to developing countries. This leads to ideas that are very difficult to reconcile. For example, suppose there were some kind of global accounting for greenhouse gas emissions that took past emissions into account, resulting in a system where developed countries would effectively have to purchase "carbon credits" from developing countries: how do we then account for underprivileged areas of so-called developed countries? No matter how we attempt to resolve this, people will consider it unfair and will vote in politicians who will revert it.
Please note I didn't even mention the issue of non-compliance from countries such as China (or the US).
This is why I think essentially nothing will be done about greenhouse gas emissions on a global scale, and we should plan for how to deal with the consequences of the change. There is also the risk of well-meaning climate-geoengineering attempts fucking it up really badly, but such attempts shouldn't be that difficult for state actors to prevent.
Here is an idea I've been thinking about a lot lately, which I claim can easily be observed if one is willing to look (and which is supported, to some degree, by the fact that people appear unwilling to look, keeping in mind that absence of evidence is not evidence of absence):
>> "Chances are that you and I hold false beliefs that are useful that may not be true."
Most people will have no problem accepting this idea, not only that it applies to people in general but also that it applies to themselves (see: this thread), provided the topic of conversation is the abstract phenomenon itself. However, if the topic of discussion is something else at the object level (particularly Culture War topics), the ability even to acknowledge this phenomenon within forum conversation, let alone admit that it may apply to oneself, seems (based on my experimentation thus far) to vanish almost without exception.
It is also my perception that this theory seems to be particularly unpopular in high-intelligence/rationalist communities, which makes it even more interesting.
It's far too early to form any strong conclusions about this one way or another, but my intuition tells me there's something interesting and important going on here.
It seems unlikely that this is a novel idea, has anyone encountered it elsewhere? Is there a name for it?
It defined a myth as something that people know isn't true but choose to believe, and act as if it were true, because it's beneficial to them or society.
But then some people forget that they’re only meant to be pretending it’s true and start to really believe it, and very quickly you’ve got religion.
I watched a talk by Yuval Noah Harari and read a few blog posts about his idea of "fictions". I have not read the book, so I stand to be corrected.
My comments below are related to "fiction", which is a foundation of his thesis. The concept of "fiction" as described in the talk (and book?) appears to have a long history, but under different names. I have traced some of this thinking to the early 1900s, which itself appears to be based on work from the 1700s, and maybe even further back. It seems to be a common idea made palatable for mass consumption.
"You're saying humans need... fantasies to make life bearable."
REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.
"Tooth fairies? Hogfathers? Little—"
YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.
"So we can believe the big ones?"
YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.
"They're not the same at all!"
YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET—Death waved a hand. AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME...SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.
"Yes, but people have got to believe that, or what's the point—"
MY POINT EXACTLY.
Terry Pratchett, Hogfather
She tried to assemble her thoughts.
THERE IS A PLACE WHERE TWO GALAXIES HAVE BEEN COLLIDING FOR A MILLION YEARS, said Death, apropos of nothing.
DON’T TRY TO TELL ME THAT’S RIGHT.
“Yes, but people don’t think about that,” said Susan.
“Somewhere there was a bed…”
CORRECT. STARS EXPLODE, WORLDS COLLIDE, THERE’S HARDLY ANYWHERE IN THE UNIVERSE WHERE HUMANS CAN LIVE WITHOUT BEING FROZEN OR FRIED, AND YET YOU BELIEVE THAT A…A BED IS A NORMAL THING. IT IS THE MOST AMAZING TALENT.
OH, YES. A VERY SPECIAL KIND OF STUPIDITY. YOU THINK THE WHOLE UNIVERSE IS INSIDE YOUR HEADS.
“You make us sound mad,” said Susan. A nice warm bed…
NO. YOU NEED TO BELIEVE IN THINGS THAT AREN'T TRUE. HOW ELSE CAN THEY BECOME? said Death.
Creating and inventing anything (societies, art, science, technology, relationships, whatever) obviously requires believing that it will work or is possible before it can be created.
But believing the earth was created in 7 days by a magical being will not make it so no matter how many people believe it.
Or illusions of free will, rationality, etc
But life would be pretty hard to live if you didn't believe in it just a little bit.
Similarly, many hardcore atheists have deep faith in the rationality of their beliefs and perception of reality.
The Cerebrum serves to please the Limbic System
That is, all of our supposedly high-minded reasoning and long-term planning are really just complex explanations added to what are basically pre-mammalian desires and impulses.
Because I'm doing it right now.
This seems to explain a lot about news feeds and why they cause so much outrage and so many odd beliefs. The fact that I can't even verify said outrage and odd beliefs in the real world is the real punch to the gut, and a good reason to get out more. And maybe unplug permanently.
- Bayesian Brain
- Theory of constructed emotions (https://www.youtube.com/watch?v=0gks6ceq4eQ)
- Free energy principle and Active Inference (https://www.youtube.com/watch?v=Y1egnoCWgUg)
A good overview is also here:
The issue to be reconciled, though, is that some of those ideas talk about "keeping uncertainty in the sweet spot, not too high or low", while others talk about "minimising uncertainty/prediction error". I think the difference will turn out to lie only in how far into the future the prediction reaches, i.e. optimising for the long term vs. the short term.
The preferred learning process reminds me a lot of the book Flow by Mihaly Csikszentmihalyi: find the "sweet spot" between certainty and uncertainty, seek contradictions to things you are learning, etc. Indeed, a lot of these points come up in various "human potential" psychology systems.
Other parts of the video give me the impression that belief-formation processes are well suited for interacting with the real world but not at all suited for processing the stream of information available online.
Any time you ask a question, the answer will update your internal distribution of potential future (or present unknown) states of the world. The information gain of a question is the reduction in entropy of your internal future world state distribution. You can assign a value to this information gain using a loss function, which will tell you the expected loss by making the maximum likelihood bet about the future state of the world given your current knowledge of the world. The difference in expected losses is the "value" of the information.
To bring this into the realm of the practical, if there is a payout for knowledge, you should choose to learn things that minimize the chance that you will make a costly bad decision, with an eye to how probable those outcomes are. If there is no payout, you should choose to learn things that will produce the greatest information gain, which is in areas where you are currently very ignorant.
TLDR: Choose to learn things that will help you avoid likely catastrophes when learning for profit, learn a little bit about a lot of very different things when learning for fun.
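The information-gain and expected-loss framing above can be sketched numerically. Here is a minimal Python illustration, where the two-state world, the particular posterior, and the 0/1 loss on a maximum-likelihood bet are all assumptions of mine, not from the comment:

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_loss(dist):
    """Expected 0/1 loss of betting on the most probable state."""
    return 1.0 - max(dist.values())

# Hypothetical internal distribution over future world states before asking.
prior = {"rain": 0.5, "dry": 0.5}

# Suppose the answer to a question yields this posterior.
posterior = {"rain": 0.9, "dry": 0.1}

# Information gain = reduction in entropy of the world-state distribution.
info_gain = entropy(prior) - entropy(posterior)   # ~0.53 bits

# "Value" of the information = difference in expected losses of the
# maximum-likelihood bet before and after the answer.
value = expected_loss(prior) - expected_loss(posterior)  # 0.5 - 0.1 = 0.4
```

With a payout attached, you would prefer questions maximizing `value`; with no payout, questions maximizing `info_gain`, which are largest where the prior is closest to uniform, i.e. where you are most ignorant.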
The thing is, whether this "optimal learning approach" is actually optimal depends on the world and on the actual, not hypothetical, distribution of futures (something we can't claim to have a complete model of). Humans do extremely well in situations where AI and robots don't, so I'd say the jury is still out here.
As humans we have much sharper priors over hypothesis space than what we can easily model, which probably explains this discrepancy.
The argument of the video is humans form beliefs to facilitate information exploration. In this context, any belief can be better than none.
The impression I get from the discussion is that human beliefs and behaviors tend to differentiate: people often have slightly different ideas about everything ("what is a bowl" was one example). People pick up beliefs easily and change them as they go along, as long as they have feedback.
This apparently works for groups of hunter-gatherers, and even for people driving cars, but less well for people using the Internet to decide whether to vaccinate their children.
In a Bayesian model there's no such thing as having no priors; that's the problem with arguing against Bayesian human reasoning using a model that can't capture the richness of human priors (which would mean modelling all relevant knowledge and intuition, including innate human instincts). And human priors include very strongly held ones like "the world is basically comprehensible, governed by rules that we can discover and understand." We cannot prove that, but to the extent that we are wrong about it, all cogitation is useless, so we assume it.
Our beliefs about "what is a bowl" include that it is an instrumental concept created by other agents similar to us in order to facilitate communication. This justifies very strong priors that it will be a simple concept and easy to generalize from small numbers of examples, at least for us. All this just by virtue of being a common word. So I don't see any way to argue that human behaviour is non-Bayesian here unless one ignores relevant prior information or ignores the decision theory question "what is the consequence of being wrong about what a bowl is".
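The "strong priors justify fast generalization" point can be made concrete with a toy Beta-Bernoulli model. The bowl framing and all the numbers below are my own illustration, not anything from the thread: an agent whose prior already encodes "simple word concepts coined by agents like us are easy to learn" ends up more confident from the same handful of examples than an agent with an uninformative prior.

```python
# Toy Beta-Bernoulli update: how a strong prior lets an agent
# generalize from a small number of examples.

def posterior_mean(alpha, beta, successes, failures):
    """Posterior mean of a Beta(alpha, beta) prior after Bernoulli data."""
    return (alpha + successes) / (alpha + beta + successes + failures)

# Estimated probability that round dishes like these count as a "bowl",
# after seeing 3 positive examples and no negatives.
flat_prior = posterior_mean(1, 1, 3, 0)    # uninformative prior -> 0.8
strong_prior = posterior_mean(9, 1, 3, 0)  # strong prior -> ~0.92
```

The evidence is identical in both cases; the prior does the rest of the work, which is the sense in which human-like generalization can still be Bayesian.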