
I think this is why it's hard sometimes to argue in support of something you believe, even if you're right.

At one point, all of the relevant facts and figures were loaded into your working memory, and with that information you arrived at a conclusion. Your brain, however, no longer needs those facts and figures; you've gotten what you needed from them, and they can be kicked out of working memory. What you store there is the conclusion. If it comes up again, you've got your decision, but not all of the information about how you arrived there.

So when your decision is challenged, you are not well equipped to defend it, because you no longer retain why you arrived at that decision, just the conclusion itself.

It's immensely easier to trust that you arrived at the right conclusion and the person who is in disagreement is missing something, than it is to reload all of the facts and figures back into your brain and re-determine your conclusion all over again. Instead, you can dig in, and resort to shortcuts and logical tricks (that you can pull out without needing to study) to defend what you've previously concluded (possibly correctly, but without the relevant information).

If this finding ends up being a good approximation of how our brains generally work, it could explain a lot about what's happening to global conversations, particularly around the Internet and on social media specifically. It also suggests a possible solution: make the data quickly available. Make it as seamless as possible to re-load those facts and figures into your working memory, and make it as unpleasant as possible to rely on shortcuts and logical tricks when arguing a point.




> At one point, all of the relevant facts and figures were loaded into your working memory, and with that information you arrived at a conclusion.

I often say "X was explained to me once and it sounded reasonable, but I don't remember the details anymore."

Sometimes remembering the reasons themselves for X off the top of your head may not be important, but knowing that there are reasons (that you can look up) is.

What the answer to something is may not be as important to remember as remembering that an answer exists.


One way to make this clear to yourself is to observe how much more difficult it is to "define bread" than it is to answer "is this bread?"


There can be surprising insights yielded from such an exercise. For example, if I think about what separates breads from cakes and muffins, I am forced to deal with the way that a typical "banana bread" (baked with lots of sugar and without yeast) is really a bread-shaped muffin more than a banana-flavored bread. This might seem overly semantic, but it does reflect differences in how it is baked and what it means nutritionally.


The examples that you're structuring your attempted definitions around (banana bread) come from your intuition. In the ultimate limit your definition would be a complete list of your intuitions.


This thought experiment ends with Diogenes running into the Academy and tossing a Guinness in my face. ;)


I have a hard time believing Diogenes would waste a good Guinness like that.


This is more about the fact that we recognize bread, and definition plays no role in the process of recognition. Even if we define what bread is, that won't play a role in our recognition of anything other than maybe-this-is-bread-plus-I'm-being-asked-to-judge-if-it-is-or-not.


Bread is defined as anything I think bread is, and the same goes for any other word. To hold another position would be in some way dishonest.


That’s not a definition :) And, by the way, a definition is not defined as whatever one thinks a definition is.


It is literally a definition: it defines the boundary between what is and isn't bread.

There is a lot of context that is needed to get to a positive identification (maybe the word you meant) of bread, but that is true of many definitions present in dictionaries, etc. today.


Maybe their definition of definition is your definition of bread?


Give us this day our daily definition


At risk of really devolving this thread, I’m pretty sure that bodybuilders generally agree that bread is counter-productive in the pursuit of definition :)


Great, now we gotta figure out what a "bodybuilder" is!


One whose body is sufficiently defined.


"Bread makes you fat??"

~Scott Pilgrim


It offers a mapping from stimuli to classifications; what else could a definition be?


That's not how definitions work. I can't know what your brain thinks bread is. And if you die, I can never know.


Definitions do not have to be computable, even in principle. For example, "a Turing machine that halts" is well-defined although there is no algorithm for classifying things into that bin.
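
A sketch of that classic diagonalization argument in Python (halts() is hypothetical; the whole point is that no implementation of it can exist):

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) halts.
        Perfectly well-defined as a set, impossible as an algorithm."""
        raise NotImplementedError

    def troublemaker(program):
        # Do the opposite of whatever halts() predicts about us.
        if halts(program, program):
            while True:   # predicted to halt, so loop forever
                pass
        return            # predicted to loop, so halt immediately

    # troublemaker(troublemaker) halts exactly when halts() says it
    # doesn't, so no correct halts() can be implemented. The *set* of
    # halting machines remains perfectly well-defined regardless.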



Definitions ideally don't require an oracle.


How then do you define Pythia?


Pythia is a mystery!


That makes you a bread-oracle O, but doesn't define bread.

Since there are some inputs x0 where O(x0) = False and some x1 where O(x1) = True, and the laws of physics are continuous (yes, even in quantum mechanics), Buridan's Principle implies that you are incapable of deciding the breadness of an arbitrary input in bounded time.


I agree that I cannot decide the breadness of arbitrary inputs in bounded time, although I contend that does not stop me from claiming to have defined bread, on the grounds that the set of Turing machines that halt is well-defined but also has the same difficulty you're describing.


A definition doesn't change: the prime numbers or the Turing machines that halt are the same sets regardless of who Putin invades next or what law Biden decides to veto.

But the set of inputs that an oracle implicitly defines, could change if the oracle changes. And you could change your mind or die tomorrow.

So you would need a very large number of definitions of bread, indexed by (time, person). Any one of them could be a valid definition - it's theoretically possible to make you look at 1000 pictures of bread so your brain is encouraged to make a bread-detector neuron, and then scan your brain and calculate its response on any input - but you don't know which one is correct to use for any purpose.

i.e. If I want to start a bakery, should I use your current bread-oracle to define "marketable bread", your bread-oracle as of 5 years ago, should I take a statistical ensemble of brain scans from millions of people, or should I use my own?

It seems like just having a function that returns true on some inputs and false on others doesn't tell you much, whereas traditional mathematical definitions have strict relations to other things.
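
To make the contrast concrete, a rough Python sketch (the oracle factory is entirely hypothetical):

    from typing import Callable

    BreadOracle = Callable[[object], bool]    # input -> "is this bread?"

    def oracle_for(person: str, time: str) -> BreadOracle:
        """Hypothetical: the bread-detector some particular brain
        implements at some particular moment. Opaque, and it drifts as
        the person changes their mind (or dies)."""
        raise NotImplementedError

    def is_prime(n: int) -> bool:
        """A mathematical definition: fixed by the definition alone,
        independent of any observer, and provably related to other
        definitions (divisibility, factorization, ...)."""
        return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))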


> A definition doesn't change

But they do: the definitions of many words have changed over time, some even coming to mean the opposite of what they originally meant.


I don't think this is true? Suppose I define "bread" as "that which has a net positive charge" [1]. Can I not put the bread candidate in an electric field in flat spacetime and measure (the direction of) its acceleration in a bounded time? I suppose I might be depending on its mass being finite, but the observable universe supports that assumption.

[1] I don't think this is a very useful definition of bread.


The original paper in Section 6 looks at the case of a magnetic field: https://lamport.azurewebsites.net/pubs/buridan.pdf

It's a practical issue that affects CPU design, which is also mentioned in the paper under the name "Arbiter Problem".


Remarkably, you are getting downvoted for stating exactly the conclusion of pretty much all philosophical discussion on the matter since the mid-20th century.

Notably, the public then reacted much as HN does now, rejecting the notion that meaning is only ever constructed and, furthermore, hopelessly solipsistic.


This would make it impossible to share definitions (even when we both think all the same things are bread).


It is impossible to share definitions of natural-language words, at least pending advanced brain scanning technology. That's a limitation of physical reality, not a philosophical flaw.


Are you implying that definitions aren't real because they're not physical objects?


I'm implying that natural-language definitions are physical objects, in your brain, made up of brain stuff, and that you can't write them down in ways that are much briefer than a full description of their physical manifestation, although you can roughly approximate them in something like a dictionary.


Then why bother writing these sentences? I have no idea what you mean by them.


>At one point, all of the relevant facts and figures were loaded into your working memory, and with that information you arrived at a conclusion

You are awfully optimistic about the rationality of humans, aren't you? :)


I know this is a joke but it seems unnecessary. Most people actually do use evidence and logic to arrive at their opinions. The problem is some people are presented with incorrect or fabricated evidence. Some people draw incorrect conclusions, or maybe some of the evidence is over their heads, so they ignore it when it's vital to proper understanding. Some people aren't particularly good at logical thinking, or never progressed past introductory levels.

This is all why you can show identical evidence to a group of people and get multiple, sometimes very different, opinions.


> Most people actually do use evidence and logic

That's not how humans function.

They are social animals and copy the opinions and beliefs of those they want to be (stay) friends with.

Being part of the group is what matters evolutionarily, not logic and being right.

And to influence others, step 1 is to make them look at you as a friend. There's a book about that :-)


“Man is not a rational animal; he is a rationalizing animal.” -- Heinlein


"Most people actually do use evidence and logic to arrive at their opinions."

They do not. The brain is a machine of lies designed to keep you alive, rather than to arrive at some pure truth. The vast majority of your brain power is subconscious. Your brain is extremely good at arriving at what it needs to know, not at knowing or truthfulness in general.

It takes an incredible effort in critical thinking (which does not come naturally) to unravel the layers of misdirection and crap your brain has produced in order to come to a kind of objective truth. It's such a headache-inducing process that few will undertake it. Even more so when the outcome of critical thinking is typically uncomfortable.

Perhaps more unsettling is that even the very concept of you is a lie. Not your body, obviously. Your inner self, your identity if you will. You think you're some kind of well-defined, consistent character. Carved in stone. One could perhaps summarize you in 10 bullet points and this idea of you is pretty stable over time. That's how you know it's you.

In reality, the brain has established this concept of you because it's in your best interest. Every little piece of input, thought, or memory that directly contradicts it (which happens constantly) is carefully dismissed, whilst confirmation of the false belief is amplified. Not because it is correct, but because it is preferential.

I'm happy to leave you in this confused state on a random Tuesday. You can now think that this guy is full of shit, which proves my point of your brain filtering information that is not in your best interest. Or, you can agree. The outcome is the same. I'm right. Or, rather, my brain thinks it is. Which is what brains do. It's a defensive organ.


I have a feeling if OP had read some of the papers surrounding Daniel Kahneman (and the works Kahneman cited) he wouldn't be so sure about mankind's rationality.

It's like the vast majority of experiments on the subject end up with "and then they proceeded to use their intuition and who they like more to make their decision".

Also, I think it was "Classical Rhetoric for the Modern Student" that also said that logical arguments are the weakest kind of rhetorical arguments since basically anything else is more likely to convince people.


Interesting thought. Perhaps that is also why people sometimes have a hard time changing their mind when confronted with new information: a certain number of bits of information have led you to your belief, and even if some of those change or turn out to be false, you can't access those bits anymore individually, but only the resulting belief.

Perhaps, the more those beliefs are reinforced, the less likely you are to access their constituents. Sounds a lot like inductive bias, but somehow different from ML.


> why people sometimes have a hard time changing their mind when confronted with new information

Something else happens with me, it's like my brain says "this does not fit in with what I understand, discard it". At a conscious level I don't hear what I've just been told. I have to be told it again, and sometimes more than twice before it finally works its way in. It's a liability for me and a frustration for others and it's just plain peculiar.


I don't think this is too uncommon. I sometimes go through such a phase, also in reading, and what helps me get back on track is to do things really intently for a while. And I mean even basic things, being really aware of what I'm doing and thinking in that moment.

When you don't pay attention to what is currently happening, it's usually that your mind goes on tangents. I'd recommend becoming aware of those tangential thought processes. Mindfulness meditation may help a bit.


"Would I not need to be a barrel of memory to also remember all my reasons? It is hard enough to remember just my opinions themselves!" -Nietzsche in Thus Spoke Zarathustra


The solution I use is to take notes.

I don't think the conversation on social media is based around data. Most data points that people have are inaccurate (if not false), taken out of context, or used with an incorrect mental model. Once someone states something on social media, they have usually taken on a viewpoint; at that point, data is generally viewed with a confirmation-bias type of approach.

I am wondering if there is a way to teach everyone to separate facts from values. The facts are the most important part that should be maintained separately (you can do this with notes). Then we need to recognize that different individuals will apply different values and focus on transmitting facts in discussions and let everyone apply their own value system.


What you described is called the scientific method.

It'd need good STEM education at a young age, one that doesn't shy away from math, or at the very least doing computer programming professionally at some point in life.

Good luck finding those in the last couple of generations in the West.


Which is also why I think using facts to convince others is a Sisyphean endeavor. It is far more rational to learn rhetoric when you have to argue. Learn to wield fallacies like a weapon.

Of course, this relates back to good-faith, bad-faith engagement. Wielding rhetoric like this constantly deters people from engaging in good-faith, so you also have to develop a heuristic to determine whether or not the individual challenging your assertions is worth engaging in good-faith in the first place.


I've found that 100/100 people just get offended and/or pissed and retreat to their amygdala if you point out a fallacy in their logic. It certainly doesn't help that many people pointing out logical fallacies are in fact wrong (and fallacious) themselves (the accusation "you're using a slippery slope fallacy", for example, is itself used fallaciously all over the place).

I'm becoming increasingly convinced that good faith engagement is essentially impossible. The only reason I engage at all anymore is for the third party that might be an honest seeker who may stumble upon the thread at some point in the future.


>I've found that 100/100 people just get offended and/or pissed and retreat to their amygdala if you point out a fallacy in their logic.

And I am sure I've been guilty of this before, many many times. Being challenged is not a comfortable position to be in. I have since learned to weaken my position to give myself and others some leeway when one of us is wrong.

>I'm becoming increasingly convinced that good faith engagement is essentially impossible.

It is certainly getting more difficult. I think it is still useful to engage with individuals in your chosen social circle honestly and in good-faith, otherwise why are they in your circle in the first place?


Favorited this comment for when my brain remembers "people argue online because of how our memory works", but not exactly how I arrived at that conclusion.


This also explains how one can hijack someone's brain into believing something even if they don't understand why it makes sense.


It's extremely difficult to maintain a database of __all__ the citations for __anything__ you ever adjudicated (reached a decision on).

Making things more easily findable, plus a database of debunked lies, might be better.

Also great would be training (for anyone) on how to spot 'magic tricks' in debates and information presentation, e.g. how things might be cut down, remixed, or staged to create something that at a glance is convincing, but with closer examination could just be gaslighting.
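
On the first point, even a crude append-only decision journal gets you most of the way; a minimal Python sketch (the path and fields are invented for illustration):

    import json, time

    JOURNAL = "decisions.jsonl"    # hypothetical path; one JSON object per line

    def record_decision(conclusion, sources, reasoning):
        """Log a conclusion together with the evidence behind it, so the
        facts and figures can be re-loaded later instead of bluffed."""
        entry = {
            "date": time.strftime("%Y-%m-%d"),
            "conclusion": conclusion,
            "sources": sources,     # URLs, citations, debunk-database links
            "reasoning": reasoning,
        }
        with open(JOURNAL, "a") as f:
            f.write(json.dumps(entry) + "\n")

    record_decision(
        conclusion="example conclusion goes here",
        sources=["https://example.com/citation"],   # placeholder URL
        reasoning="one-line summary of why the sources support it",
    )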


> It also suggests a possible solution

Is there a problem? The so-called global conversation concern seems to be simply that some people have differing feelings, and their feelings push them to want others to share in the same feelings. To 'solve' for the feelings of some implies that their feelings are of greater importance than the feelings of others, but that seems pretty wishy-washy.



Another potential upside of a brain-to-computer interface (e.g., Neuralink): the ability to store every memory you have ever had (while the device was installed) in full resolution.

Assuming of course you maintain a server rack at home with copious amounts of hard drives.
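
Back-of-envelope in Python (every figure here is an assumption for illustration, not a real device spec):

    channels  = 1024        # assumed electrode count
    sample_hz = 20_000      # assumed samples per second per channel
    bits      = 10          # assumed bits per sample

    bytes_per_sec = channels * sample_hz * bits / 8    # ~25.6 MB/s
    tb_per_day    = bytes_per_sec * 86_400 / 1e12      # ~2.2 TB/day
    tb_per_year   = tb_per_day * 365                   # ~800 TB/year
    print(f"{bytes_per_sec/1e6:.1f} MB/s, {tb_per_day:.2f} TB/day, "
          f"{tb_per_year:.0f} TB/year")

Uncompressed, that's a rack, not a NAS.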


People will still argue that self-hosting is too hard so you might as well just accept that Evil Corp is gonna be the central store of all memories (with a great proprietary format!). Better not think of anything that violates the terms of service.


(...and we figure out how to do that, which is uhh...not close).


> full resolution

What, generally, do you think this might mean?


The ability to experience a memory as precisely as you want, including the option of a full mental transplant, like loading a save file for a video game. See, hear, touch, smell, taste, and think the exact same thoughts as you did 15 years ago. The playback mechanisms will have some caveats, as it may not strictly be possible to play back perfectly, since you are a different person with a different brain and body than, say, 15 years ago. You could relive something in the first-person perspective, or perhaps just observe yourself from a third-person perspective.

To a lesser degree, just being able to hear the dialogue in your brain at the time of a memory would be monumental. Then you can get into the business of using tools built around this, such as searching your memories, running statistical analyses (maybe you can find out why you haven't been able to commit to an exercise habit for the past 5 years?), and so on.


I have aphantasia, so my experience of memories is generally closer to factual recall than sense experiences; additionally I don't have an inner monologue. Which is sort of why I asked: memory is not necessarily a record of our sense experiences. Keeping an arbitrarily precise record of our sense experiences would be quite cool and useful, but that would necessarily be a different physical process than memory, and any "memory" generated from that data would only be an interpretation of what that sensory experience might have been.


When I say memory I mean every possible electrical signal in the brain, including sensory input. Maybe it won't be possible to "see what your eyes saw 10 years ago" directly in the brain, but perhaps you could render it on a monitor?

When it comes to not having an inner monologue, that complicates the example, but I think it's still possible to work with actual memory. What I suggested was a tool to search your memory by tapping into the words from the inner monologue, but if you don't have that available, you can still search the signals of the brain, it would just be less comprehensible. Say you're trying to quit smoking, you could pattern match the brain signals that are present when you have a craving by checking historical data, and pipe that feedback into a controlled release nicotine patch designed to slowly taper you down over a few months.

Edit: While that particular use case doesn't sound exciting (why not just use a regular patch?), I don't think it's because the possibilities aren't exciting but more so I'm just not the best at imagining what the use cases would be specifically.
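
A toy sketch of that feedback loop in Python (the signals, threshold, and taper schedule are all invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    craving_signature = rng.standard_normal(256)   # stand-in for a template
                                                   # learned from historical data

    def craving_score(live_window: np.ndarray) -> float:
        """Normalized correlation between a live window and the template."""
        a = (live_window - live_window.mean()) / live_window.std()
        b = (craving_signature - craving_signature.mean()) / craving_signature.std()
        return float(a @ b) / len(a)

    def patch_dose(live_window: np.ndarray, week: int, taper_weeks: int = 12) -> float:
        """Release nicotine only during detected cravings, tapering to zero."""
        taper = max(0.0, 1.0 - week / taper_weeks)
        return taper if craving_score(live_window) > 0.5 else 0.0

    live = rng.standard_normal(256)    # stand-in for a live recording window
    print(patch_dose(live, week=3))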


I had that thought this morning, knowing I have to present at a design review today!

I think the boring solution is to take written notes when making decisions. As an engineer, I find that architecture documents are very powerful and always worthwhile.


Insightful. Thanks.


Sometimes I find that the solution to some questions is so complete that I don't even remember what my issue was originally.


This justifies all the hours I spend on HN. :)


This is a very astute point. But I would also add that, IMO, you only ever perceived reality as a compressed summary in the first place.


This is why verbal debates are bad.


But I'd rather have event sourcing: save every single event in my brain and rebuild the current state of my memory weekly or monthly.

All those inaccurate facts you learned would be gone and would not pollute your brain.
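
A minimal sketch of that idea in Python (event sourcing: the append-only log is the source of truth, and current state is rebuilt by replaying it):

    # Append-only log of everything you ever "learned" or retracted.
    log: list[tuple[str, str]] = []    # (action, fact) pairs

    def learn(fact: str) -> None:
        log.append(("assert", fact))

    def debunk(fact: str) -> None:
        log.append(("retract", fact))  # never delete; append a correction

    def rebuild_memory() -> set[str]:
        """The weekly/monthly rebuild: replay the log in order, so
        retracted facts never make it into the current state."""
        state: set[str] = set()
        for action, fact in log:
            if action == "assert":
                state.add(fact)
            else:
                state.discard(fact)
        return state

    learn("bread makes you fat")
    learn("definitions live in brains")
    debunk("bread makes you fat")
    assert rebuild_memory() == {"definitions live in brains"}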



