Hacker News
Doubt explanations (sive.rs)
52 points by marban 3 days ago | 43 comments





I don't know about these study conclusions; the secret directions seem more like advertisements to me: Someone whispers "Please walk" and my brain (consciously or otherwise) thinks about it and realizes I'd like a drink, just like if I drive past a McDonald's billboard and think, "Hmm, french fries would hit the spot right now." Someone flashes "close the window" and it draws attention to the fact that actually, I am kind of chilly and would like the window to be closed. It doesn't mean the desire was made up, just not considered until it got the spotlight.

> your subconscious invents an explanation that you think is a fact.

That may be the case for some people, but it isn't the typical case for me. Quite the opposite: my subconscious drives the show, and only after the fact, if someone asks me about something or I'm forced to confront why I did things in the very unlikely way I did them, does my conscious mind try to create an explanation for the unconscious actions I've already taken.


I agree. I heard a researcher put it like this: "We often think of the conscious mind as being the guy in the driver's seat, making decisions and guiding behavior; but really, most of our decision-making and behavior is guided by subconscious processes, and the conscious mind is more like the PR person who comes up with stories that justify our behavior."

Well, I don't agree with that fully either.

For me, if there is a moral quandary, or a highly analytical situation that the subconscious can't quite estimate, then it gets kicked up to the conscious layer for a real, computationally expensive decision.

Something like "do I really want to flirt with this woman, given that I'm married?" or "wait a second, the abstraction here is leaking across domains, which may ruin the architecture of this program in the long run" gets kicked up to the upper layer. But for most of my programming, I honestly just keep it at the subconscious layer, and so long as there is music the conscious mind can check out to, we're good.


The entire point is that your plausibility generator generates things that seem plausible to you. Disagreeing with this basic human brain function is at least ironic in context.

It's a funny bit of musing, but again I disagree on the terms. The subconscious layer of my mind does not merely generate plausibilities. That is part of it, yes; especially at the bounds of my senses, colours will shade in that aren't there, or shapes will look like people or characters. This is especially prevalent when I'm learning something new. For example, I'm learning how to read and write Traditional Chinese right now, and when I move my eyes fast, symbols will briefly look like Chinese ones that I know, even if they're actually Japanese characters that closely resemble them.

But the subconscious also handles two other major functions that are not involved in plausibility generation.

1. Prediction. Not just "predict something plausible" but "predict the time down to the second" or "predict what people are about to say on this podcast I'm listening to," and when it gets it uncannily right or surprisingly wrong, it generates laughter of some kind.

2. Tacit learning. The type you only get by bathing in a topic for a good long time, so that connections can form below the linguistic layer; surfacing them requires abstract philosophical pondering after the fact.

There are other functions (snap judgements, feeling generation to guide strategic choices, creativity[0], and, um, "connection"[1]), but those three together, I would say, comprise the bulk of the subconscious experience from my perspective.

[0] Some disagree that this is subconscious. All I can say is that they're 100% wrong for me.

[1] Something I wish I had a word for, but it's essentially the feeling and thinking generated by joining yourself with a piece of technology like a bike or a computer or a car. It's as if one's exoskeleton is donned and the neural pathways update to take it on.


I'm a little doubtful about applying a story about the behavior of people whose brains have been surgically split in half to the rest of us who still have a corpus callosum. It's probably stretching my point, but you could even say that's a confabulation, which in the context of the article seems to be a made-up plausible story with no demonstrated basis in reality, or even consensus reality.

Since we are explaining ourselves, the best confabulation is all we can strive to achieve.

I think about this often with regards to my mood. Due to brain chemical imbalances I often just feel depressed. But this is tolerable and not the end of the world—the problem comes when my brain starts to invent (confabulate!) explanations for why I feel bad. Maybe it’s because this room I’m in is messy, or maybe it’s because I’m single and nobody loves me. On reflection, these “explanations” are basically always false. But they feel true, which is a trap I work hard to avoid and which I think a lot of people would benefit from thinking about.

Is only one side of the brain able to speak? Is the side of the brain that read the text internally screaming about the wrong explanation coming out of the mouth? How did both sides of the body operate the limbs to close the window if only one side of the body knew it should close the window? I'd love to learn more about this.

Search "split-brain experiments".

I am so glad for this word "confabulate"! I have found myself doing this many times so it's nice to have something to call it.

I thought the correct term is "rationalize", but that may only be the (probably flawed) translation from German.

Rationalize would work too. Confabulate communicates some interesting nuances, at least for me, about how this is a creative or synthetic process, whereas when I think of someone rationalizing something, it feels more deliberate and mechanical to me.

AFAIK rationalization is basically the bridging of cognitive dissonance. From that perspective it doesn’t seem to be deliberate.

Maybe deliberate is too strong a word, or maybe I'm using a loose definition of rationalization. I've observed rationalizations that I think were a choice, but not necessarily a conscious or considered choice. When I find myself rationalizing I feel like it often bubbles up and then I choose to push it back down (at least, until I'm ready to admit to myself this is happening, and face the consequences).

There are two people from my past who I think of. One was a conspiracy theorist who moved fluidly and rapidly from one idea to the next, and these ideas were often contradictory. It was a live, synthetic process; they were connecting the dots of different conspiracy theories on the fly. If you pointed out a contradiction to them, they'd spin a new yarn to resolve it on the spot. This is what confabulation reminds me of. There was no destination; it was a dance through fanciful ideas. That's what feels less deliberate to me: it wasn't so much providing a justification or bridging a cognitive dissonance as storytelling. (They once told me their epistemology was basically founded in the emotional impact of a story; they believed they had a sort of "truthiness sense" and that what moved them was what was true.) If it were a science fiction story it would have been riveting, but as an epistemology it really limited their ability to understand the world and have relationships.

Another is a friend who I had some difficult conversations with about their behavior, and after some heated discussion I finally got through to them, at least in part. But then the very next day they told me a brand new reason for why they thought their behavior was okay (it wasn't). And that felt like a choice to me. They wanted to do something, and they found a frame of thinking where it was permissible. They definitely bridged a cognitive dissonance, and I don't think they set out to do what they did, but I feel at some level they made a choice.

That being said, I think when I rationalize it's often something along the lines of, "what I'm doing is hurting me, but I can't stop because someone is counting on me to do it," and that's only a choice by the strictest definition. And I can see how my friend might've seen things that way too.


That's not a bad word, either, FWIW. I'd say it's both, with confabulating being the particular method of rationalization.

Nitpick: Latin.

To paraphrase Charlie Munger, the bees buzz.

To draw on Robin Hanson, the less you understand why you did something, the better you project to others you did it for the right reasons.


The Righteous Mind by Jonathan Haidt opens with the moral-psychological explanation of this phenomenon: the brain judges first and explains later. Oftentimes the reasons do not make much sense.

There is some more fundamental, sub-rational decision-making process at work, which introspection (though Haidt remains drier than this, as far as I've read) may illuminate.

Evolutionarily, the need to make simple survival decisions quickly and consistently, while suffering the minor hardships of obtuse thinking, makes perfect sense, whether you knew it or not.


These are not explanations, they are rationalizations.

Definitions matter.


>people who have a disconnect between the left and right hemispheres of their brain.

>A researcher shows a patient a message in his right eye, saying, “Please close the window.” The patient gets up and closes the window. Then the researcher shows a question to that patient’s left eye, “Why did you close the window?” The patient says he chose to do it because he was cold.

Quite reminiscent of Julian Jaynes’ Bicameral Mind hypothesis.


I am really convinced by this explanation on how... wait

To doubt does not require us to dismiss and ignore.

We can be aware of the phenomenon of confabulation, especially in brain damaged people, yet still engage in sensemaking based in part on self-reports.


> A researcher shows a patient a message in his right eye, saying, “Please close the window.”

Shouldn't that be, "in the right half of his visual field?"


The whole of human existence is the attempt to explain the things happening around you (even if you don't get them), basically to calm down our consciousness.

That's an interesting take on things. Would you say this condition generalizes to other animals, or is it unique to ones above a certain level of intelligence?

If anyone is able to discover the referenced "study", please let me know. Or was this just something confabulated by the author?

Sivers is referencing well-known and well-documented neurological phenomena and behaviors related to split-brain conditions.

https://en.wikipedia.org/wiki/Split-brain

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7305066/


You likely want to follow that Wikipedia article with https://en.wikipedia.org/wiki/Left-brain_interpreter

Ah yes, rule 2 of dating a narcissist.

That is... meaningless drivel, at best.

The author is likely looking at the Sperry/Gazzaniga experiments and... getting them wrong. The insight was that the left brain fills gaps in information, and yes, it is confabulating, if your left and right hemispheres happen to be severed.

Our reasoning is far from unknowable in a non-severed brain. Yes, people are sometimes lying, and yes, "actions speak louder than words", but that doesn't mean you should blanket-dismiss explanations.


"When disagreeing, please reply to the argument instead of calling names. 'That is idiotic; 1 + 1 is 2, not 3' can be shortened to '1 + 1 is 2, not 3.'"

Your comment would be fine without the first bit.

https://news.ycombinator.com/newsguidelines.html


I'd withdraw it, but edit period has passed.

I'd say the statements at the end were a bit stronger than was warranted, and I'd say something like, "be skeptical of explanations" or "be cautious of explanations." I don't think they said that our brains are _like_ people with lobotomies, but that this was an illustrative example.

I've definitely experienced people's stated beliefs continually disagreeing with their actions, and their producing a font of rationalizations when I asked them about it, until I was forced to conclude they were lying to me only because they'd first lied to themselves. Or confabulated, if you prefer.

And I think it's important to understand that this can happen & that one's self can do it too. Which is, yanno, a meaning.


We do the same thing when we're not split brained, but just damaged.

https://en.wikipedia.org/wiki/Hemispatial_neglect

When we have brain damage that causes us to neglect one side, we don't notice that we're neglecting one side. If it's pointed out, we unintentionally give specious reasoning as to why it happened.

This is common knowledge, not based on a particular study, but many of them.

edit:

> Our reasoning is far from unknowable in a non-severed brain.

Are you saying this based on something, or is it just a personal belief? If there's anything we know about introspection, it's that it is untrustworthy.


There is a large difference between "not entirely trustworthy" and "unknowable".

And in both cases - severed lobes, and hemispatial neglect - serious trauma needs to be present to observe this effect.

I do think the changed title here on HN is doing the subject much more justice than the attention-grabbing attempts of the article.


I suppose it's an example of the age-old adage: all models are wrong, but some models are useful. Our models of ourselves, derived from introspection and from observations others make, are wrong, but if we're honest with ourselves, they'll be a useful reflection of reality.

I think calling it meaningless drivel is a bit much, but you are right that we should be careful drawing inferences.

It's "at the very least" worth considering: What does it mean that under specific circumstances we can be so sure of ourselves about such basic stuff and so totally wrong at the same time?

It might not be a strong inference from split-brain phenomena, but it'd be a shame /not/ to wonder about whether, and how much, such confabulatory mechanisms are at play in normally functioning brains.


It is meaningless drivel.

Some people confabulate, memory is not perfect, people with severe brain injuries visibly confabulate, so all explanatory power is meaningless and we should never trust it.

Never mind that some specific lesions to the brain actually provide explanations for how we create narratives and confabulate. Those explanations are equally illusory...


It seems like I've seen an increased number of these low-effort blog posts, with short, pithy, unsupported and almost-content-free "advice."

As often as "advice" is a form of nostalgia ("Don't make the same mistake I did."), it can still be useful if read from that perspective. This person isn't giving advice so much as explaining a mistake they made. Any calls to action in the article are just behavioral changes the author learned after noticing that mistake.

It's a good conversation starter, this one, at least.


