That may be the case for some people, but it isn't the typical case for me. Quite the opposite. My subconscious runs the show, and after the fact, if someone asks me about something, or if I'm forced to confront why I did things in the very unlikely way I did them, my conscious mind tries to create an explanation for the unconscious actions I've already taken.
For me, if there is a moral quandary, or a highly analytical situation that the subconscious can't quite estimate, then it gets kicked up to the conscious layer for a real, expensive (computationally speaking) decision.
Something like "do I really want to flirt with this woman, given that I'm married" or "wait a second, the abstraction here is leaking across domains, which may ruin the architecture of this program in the long run" gets kicked up to the upper layer. But for most of my programming, I honestly just keep it at the subconscious layer, and so long as there is music the conscious mind can check out to, we're good.
But the subconscious also handles two other major functions that are not involved in plausibility generation.
1. Prediction. Not just "predict something plausible" but "predict the time down to the second" or "predict what people are about to say on this podcast I'm listening to," and when it gets it uncannily right or surprisingly wrong, it generates laughter of some kind.
2. Tacit learning. The type you only get by bathing in a topic for a good long time so that connections can form that are below a linguistic layer and to surface them requires abstract philosophical pondering after the fact.
There are other functions (snap judgements, feeling generation to guide strategic choices, creativity, and, um, "connection"), but those three together, I would say, comprise the bulk of the subconscious experience from my perspective.
Some disagree that this is subconscious. All I can say is that they're 100% wrong for me.
Something I wish I had a word for, but it's essentially the feeling and thinking generated by joining yourself with a piece of technology like a bike or a computer or a car. It's as if one's exoskeleton is donned and the neural pathways update to take it on.
There are two people from my past who I think of. One was a conspiracy theorist who moved fluidly and rapidly from one idea to the next, and these ideas were often contradictory. It was a live, synthetic process; they were connecting the dots of different conspiracy theories on the fly. If you pointed out a contradiction to them, they'd spin a new yarn to resolve it on the spot. This is what confabulation reminds me of. There was no destination; it was a dance through fanciful ideas. That's what feels less deliberate to me: it wasn't so much providing a justification or bridging a cognitive dissonance as it was storytelling. (They once told me their epistemology was basically founded in the emotional impact of a story; they believed they had a sort of "truthiness sense," and that what moved them was what was true.) If it were a science fiction story it would have been riveting, but as an epistemology it really limited their ability to understand the world and have relationships.
Another is a friend who I had some difficult conversations with about their behavior, and after some heated discussion I finally got through to them, at least in part. But then the very next day they told me a brand new reason for why they thought their behavior was okay (it wasn't). And that felt like a choice to me. They wanted to do something, and they found a frame of thinking where it was permissible. They definitely bridged a cognitive dissonance, and I don't think they set out to do what they did, but I feel at some level they made a choice.
That being said, I think when I rationalize it's often something along the lines of, "what I'm doing is hurting me, but I can't stop because someone is counting on me to do it," and that's only a choice by the strictest definition. And I can see how my friend might've seen things that way too.
To draw on Robin Hanson: the less you understand why you did something, the more convincingly you can project to others that you did it for the right reasons.
It points to some sub-rational decision-making process that is more fundamental, one which careful introspection (though Haidt remains drier than this, as far as I've read) may help divine.
Evolutionarily, the need to make simple survival decisions quickly and consistently, even at the cost of the minor hardships of obtuse thinking, makes perfect sense, whether you knew it or not.
>A researcher shows a patient a message in his right eye, saying, “Please close the window.” The patient gets up and closes the window. Then the researcher shows a question to that patient’s left eye, “Why did you close the window?” The patient says he chose to do it because he was cold.
Quite reminiscent of Julian Jaynes’ Bicameral Mind hypothesis.
We can be aware of the phenomenon of confabulation, especially in brain damaged people, yet still engage in sensemaking based in part on self-reports.
Shouldn't that be, "in the right half of his visual field?"
The author is likely looking at the Sperry/Gazzaniga experiments, and... getting them wrong. The insight was that the left hemisphere fills gaps in information, and yes, it is confabulating, but only if your left and right hemispheres happen to be severed.
Our reasoning is far from unknowable in a non-severed brain. Yes, people are sometimes lying, and yes, "actions speak louder than words", but that doesn't mean you should blanket-dismiss explanations.
Your comment would be fine without the first bit.
I've definitely experienced people's stated beliefs continually disagreeing with their actions, and their producing a font of rationalizations when I asked them about it, until I was forced to conclude they were lying to me only because they'd first lied to themselves. Or confabulated, if you prefer.
And I think it's important to understand that this can happen & that one's self can do it too. Which is, yanno, a meaning.
When we have brain damage that causes us to neglect one side, we don't notice that we're neglecting one side. If it's pointed out, we unintentionally give specious reasoning as to why it happened.
This is common knowledge, not based on a particular study, but many of them.
> Our reasoning is far from unknowable in a non-severed brain.
Are you saying this based on something, or is it just a personal belief? If there's anything we know about introspection, it's that it is untrustworthy.
And in both cases - severed lobes, and hemispatial neglect - serious trauma needs to be present to observe this effect.
I do think the changed title here on HN is doing the subject much more justice than the attention-grabbing attempts of the article.
It's "at the very least" worth considering: What does it mean that under specific circumstances we can be so sure of ourselves about such basic stuff and so totally wrong at the same time?
It might not be a strong inference from split-brain phenomena, but it'd be a shame, then, to /not/ wonder about whether and how much such confabulatory mechanisms are at play in normally functioning brains.
Some people confabulate, memory is not perfect, people with severe brain injuries visibly confabulate, so all explanatory power is meaningless and we should never trust it.
Never mind that some specific lesions to the brain actually provide explanations for how we create narratives and confabulate. Those explanations are equally illusory...