Quickly scanned through the references and didn’t see one interesting perspective: Julian Jaynes’ theory of the “breakdown of the bicameral mind” in response to increasing societal complexity.
His idea is that the voices of “gods” in premodern societies were literal command hallucinations experienced by members of these societies, in the place of having self-identified conscious thought. In a way, he argues, premodern peoples could be considered to be “not conscious”, as they had no inner locus to consider or contradict the voices they heard. This contrasts with the “conscious” experience, in which we personally identify with a sort of homunculus inside our heads that is watching and judging its surroundings.
Jaynes argues that societal upheaval and complexity led to changes (maybe cultural, maybe biological) that caused the “voices” to disappear, becoming absorbed into a new “conscious” stream of self-identified thought. From the perspective of societies experiencing this change, their inner gods would have been thought to have gone silent, leading to greater identification with (and societal organizing need for) externalized, idolized god figures.
The hypothesis is completely implausible, and it is quite irritating that it keeps popping up in discussions in the 'hacker' sphere.
There is simply no way that bicameralism would be the dominant phenotype only a few thousand years ago, and be completely absent in any present-day population, anywhere. We would still see it in isolated populations.
What I find interesting about this hypothesis is that consciousness is just a stream of thoughts which we do not control. Whenever you think something, it just arrives in your mind. Often we experience thought as spoken words in our mind, but we don't know where they come from. They come from the subconscious, but we are not aware of that part of ourselves. In dreams, information is expressed to us from the subconscious through symbolic representations. What if we could talk to our subconscious mind? When thinking about this I always remember how John Nash described his paranoid ideas coming from the same voice his mathematical ideas came from.
> What I find interesting about this hypothesis is that consciousness is just a stream of thoughts which we do not control.
I would define consciousness as that which keeps the body alive, reacting to changes in the environment and actions of other agents.
Consciousness is a perception-judgement-action-reward loop. It's a reinforcement learning kind of situation, where the agent, embodied in an environment, learns to act in such a way as to maximise its rewards. We have been programmed by evolution to favour certain kinds of rewards which are good for keeping us alive.
This definition is much more concrete and practical - it has been applied with some degree of success in designing AI agents. It gives a purpose and a mechanism to consciousness.
The consciousness loop is formed of sensing, judging, acting, and learning from sensation and reward. Sensing is like deep learning: a neural net converts raw sensory signals into high-level representations. Sensory learning is mostly acquired by reconstruction or self-supervision. Judging is like RL, where an agent learns to predict the future rewards following its current situation and actions. It can be model-based (reasoned) or model-free (instinctual). Acting is like the feedback-based motion planning algorithms in robotics. Reward learning happens in the background of all these processes. When you put all these systems together, inside a body, inside an environment, you get consciousness.
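The loop described above can be sketched as a tiny reinforcement-learning agent. This is purely an illustrative toy, not anything from the paper or from Jaynes: the environment, hyperparameters, and names are all invented, and the "sensing" step is trivial here because the state is exposed directly.

```python
import random

# Toy perception-judgement-action-reward loop. All names and numbers
# are illustrative, not drawn from any source in this thread.

class CorridorEnv:
    """Agent starts at cell 0; the reward sits at the rightmost cell."""
    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # "sensing": raw state exposed directly

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action else -1)))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else 0.0), done

def q_learn(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    # Model-free "judging": tabular estimates of future reward.
    q = {(s, a): 0.0 for s in range(env.length) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action choice (mostly exploit, sometimes explore).
            if random.random() < eps:
                a = random.choice((0, 1))
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)  # "acting" in the environment
            # "Reward learning" running in the background of the loop.
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

q = q_learn(CorridorEnv())
print(q[(0, 1)] > q[(0, 0)])  # moving toward the reward should look better
```

After enough episodes the agent's value estimates favour stepping right at every cell, which is the "learns to act so as to maximise its rewards" part of the comment above in miniature.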
In the end it's just the competition of genes for survival, self replication in a limited resource environment over billions of years.
Nash said "the same way", not "the same voice":
“the ideas I had about supernatural beings came to me the same way that my mathematical ideas did. So I took them seriously.” - Sylvia Nasar, "A Beautiful Mind"
Thanks. Fair enough, but he did say "they came". And if you think about it, he does not specify "the way". I remember that he heard voices, because he said that to get cured he just stopped listening to the voice or voices.
> no way that bicameralism would be the dominant phenotype only a few thousand years ago
It would be cultural and taught, not genetic.
There are non-isolated populations: I come from a religious background where the idea that God "talks" to you is quite prevalent. If you want an analysis of how this could possibly be the case in our modern society, I recommend Tanya Luhrmann's "When God Talks Back".
I say Jaynes puts the transition as far too sudden compared to what it actually could have been. Individuals could have developed his version of "consciousness" often and repeatedly, but failed to pass it on into the greater culture. I think of literacy as an analogous transition; until it's actually of benefit for the average person to read and write (i.e. society has moved beyond majority subsistence), it's not going to happen.
Are you saying the Bicameral Mind could be a result of a viral infection? And that virus could have disappeared in the last two thousand years? Or are you saying the phenotype could have disappeared?
This is an idea which has been popularized by the new TV series Westworld. It is interesting; however, it is not the topic of this paper, which instead looks into sociological perspectives rather than an internal psychological theory that is difficult to prove.
Jaynes' theory may have merit, but it isn't that relevant to this paper, which is an empirical research paper on social data and theories.
I know the bad connotations of schizophrenia are more present in modern societies, but still, I can't see how a primitive group of schizophrenics would be able to thrive.
Is there any proof or actual need for this bicameral mind other than trying to explain the religious phenomenon?
Who says we're not "schizophrenic" now? Our common parlance attributes agency to blind chaotic forces in the universe; our visual systems are quasi-hallucinogenic in their mechanisms (making all sorts of "optical illusion" mistakes); and while we like to think we're singular, holistically consistent beings, our actual behavior is often better described by nested (and competitive) sub-agent neuron clusters: https://meltingasphalt.com/neurons-gone-wild/
It's worth considering that Jaynes' argument has as much to do with language as religion, positing a tightly coupled relationship between (spoken) language and subjective consciousness/qualia. "Gods" were in part an explanatory mechanism for the same phenomena of consciousness and (feeling of) decision-making that we experience today, which we now explain via "Self" narratives. In other words: rainbows, and our experience of them, did not fundamentally change, only our social stories about rainbows (God's Covenant -> Light Refraction).
Assuming one accepts a relationship between consciousness and language, it's also worth considering that language is a socially viral phenomenon, and so it makes sense that we might externalize consciousness explanations into a social space, via the common knowledge of our community's tribal gods, or a shared animist experience of nature. The innovation of the Self is a relatively recent one in this model; and even then, each of us frequently make the mistake of modeling other Selves after our own Self (the only god the secular world honors), resulting in biases like the Curse of Knowledge.
At any rate, Jaynes' argument is quite detailed and thorough, and difficult to make concisely. I found "Origins of Consciousness" to be a thought-provoking read, but I'm still not sure I'm willing to stake a strong claim For or Against the model.
I understood it to be less like schizophrenia and more like early peoples who were not aware of their internal monologue. Their own thoughts were perceived as the command or voice of gods, because how else would they be able to "hear" words that were not spoken by anyone around them?
It's beyond me how you could ever show evidence for that, beyond reading between the lines of ancient text (which we know is one of the least reliable forms of historical evidence ever invented).
This isn't evidence, but I used to be religious and there was a certain religious mindset in which the internal monologue became more like an internal dialogue. Given this, and that IIRC there is a part of the brain that has been associated with religious experience, the bicameral mind seems at least plausible, though maybe not exactly as originally posited.
I posted a similar comment above on how religious approaches give bicameral theory some plausibility: people genuinely do think God talks to them, even today.
I wouldn't say my inner monologue is speaking to me in my native language. That's just how my conscious mind tries to interpret it.
I can 'think' without speaking to myself internally. My 'inner monologue' doesn't always express itself in words. I think people are reading too much into the 'monologue' part and ascribing too many actual conversational qualities to it.
> I think people are reading too much into the 'monologue' part and ascribing too many actual conversational qualities to it.
You're also making assessments of how other people's minds work without having any perspective outside how your own works. My monologue is fairly conversational. I also don't need to be thinking "out loud" to think. That seems like it would be incredibly slow. Much of my day to day doesn't require active thinking. However, my internal monologue comes in when I've got a meaty problem to work over. It's like talking the idea out with another person.
It probably varies between people, both actual experience and interpretation. I find that I have a very verbal "stream of consciousness" which I would label my "inner monologue" but which certainly doesn't constitute all of my thinking.
Huh, published in 1976. Somehow I'd assumed that this sort of hypothesis would have been proposed back in the 1930s, when its assumptions would have been a lot more credible.
As Dawkins wrote in The God Delusion, "It is one of those books that is either complete rubbish or a work of consummate genius, nothing in between! Probably the former, but I'm hedging my bets."
I'm really surprised that he and Dennett are so positive on the idea. In the humans running around today, whom we can do experiments on, we can clearly see that it's the same process that allows us to serialize experiences in order to speak about them and to serialize them in order to remember them. But I suppose before fMRI that work might not have been as convincing.
And I suppose, as someone who does a lot of thinking without an internal narrative I feel a bit attacked by the whole idea.
>And I suppose, as someone who does a lot of thinking without an internal narrative I feel a bit attacked by the whole idea.
When ideas like this and Sapir-Whorf pop up, it really makes me wonder if maybe the social "sciences" are studied exclusively by some subset of the population that thinks entirely differently from me. How else would they come to the conclusion that thoughts take the form of articulated monologues? Maybe I'm the outlier, but it takes deliberate effort for me to serialize my thoughts into anything articulate. I mean, do other people just hear "I didn't give the correct arguments to this function" in their head when they code?
When I'm doing serious thinking about software design / architecture, or debugging/problem-solving, I often find that I have a kernel of an idea which seems fully formed but inexpressible. I can "see", in my mind's eye, the different interactions between components (floating around in sort of a three-dimensional space), but it's often hard to put it to paper.
For me, the hardest work I often do is to describe my ideas in English text. Source code is somehow easier in some way; possibly because it's more concrete and less "squishy".
When philosophers sit down and ask themselves, "How do I think?", they invariably end up concluding that consciousness is an internal monologue. A monologue is one of the many capabilities of the mind, but it's highly overrepresented in philosophical writings because the subject is himself trying to formulate the answer in words while doing it.
If instead you developed your theory of the mind while observing people performing tasks, you might see occasional evidence of inner monologue but it wouldn't be the dominant paradigm.
I doubt they do, but have you never had a sort of "heart-to-heart with yourself"? I'll sometimes take a walk with the explicit intention of having a good think, and whenever I do that, my thoughts absolutely take the form of internal monologue. I think if you're really thinking something through, on a subject you haven't really explored, the 'voice' manifests. Just my .02.
People are different. There's a wide variety of extents to which people perceive their thoughts as consisting of words, and also a wide variety of extents to which people are able to form mental imagery.
For me, most thinking isn't verbal unless I'm thinking of explaining my thoughts to someone else. Sometimes this is annoying: I've worked something out for myself and I go to explain it, and I have this nice chunked[1] concept for a step, but when I hit that concept I realize it doesn't correspond to a word and I have to go on a long, unexpected digression. But sometimes there are words for which I haven't chunked the underlying concept well, and I just use that word in isolation inside my head.
Also, some seem to be basically unable to form mental imagery and often assume that people are speaking in metaphor when they describe this. Some people are able to form mental imagery with fairly high fidelity to what their eyes see with very little effort. Personally I have an easy time with shapes but things don't tend to have colors unless I make a conscious effort.
I sort of suspect that people who refer to people by name inside their heads have an easier time remembering names but AFAIK it hasn't been studied. Likewise, I suspect there are correlations between mental subjective experience and what jobs people take. More research is needed, clearly.
I wonder how much of the mental thought process is genetic vs learned, because I can totally relate to your description.
You mention colour, which is interesting, because I was never attuned to colour until I worked as a photographer for a decade where it actually matters beyond just symbolism (ie blue sky = nice day, red light = stop). Now it is often prevalent in my thought process.
I would say my own thinking is a wild mix of language, maths ('just' another language really), graphs and other visual representations. Nigh impossible to form it into a coherent linear narrative.
So basically while the conscious mind was evolving, people originally took that mind to be an external voice? Is it not totally automatic that people can control their conscious mind?
Well, consider maybe that it has to do with a couple of factors, not just learning, but opportunities to learn, as artificial environments (homes) become more refined, and there's more opportunity to contemplate upon abstractions.
Not only that, but capacity to contemplate, that is to say, relative intellect has likely been augmented by our capacity to self domesticate within such environs.
On the one hand, you have the mirror test for self-awareness, though it's tough to construct a comparable structured assessment that measures up to it. Operating mirrors is an acquired skill: sentient beings need exposure to a mirror for some time before they learn to reconcile how to use it for self-examination. But without a mirror, other stand-ins, like shadows, masks, and palm prints, start to reveal an awareness of identity. So we can look to those sorts of artifacts to resolve whether a remote indigenous society has begun to self-recognize, in general, for nearly all healthy adult members at large.
But, also consider our evolutionary effect on how smart dogs are. Dogs are pretty smart, in possession of emotion, highly attentive to mood, cooperative, and recognize their own identity according to name. They recognize their own scent, and prioritize that factor of self above their appearance in mirrors, which they eventually learn is not another dog. They also dream, which probably means some dogs have vivid internal processes, even if they don't have an internal monologue.
Human societies have writing though, and it's probably writing that leads to complex societies, and of course, writing enables artifacts that record oral traditions, and eventually theology and mythology.
Really, though, look at it this way. Oral traditions are about entertaining an audience, but eventually, the written word descends into virtue signalling, as intellectuals paint each other into corners.
Ever get caught by the wrong person with a diary entry that spills your guts, telling all about what you think of everyone? How many times did you make that mistake?
That's what this reveals. That the intelligentsia gains a collective recognition that failure to moralize is a form of checkmate in the court of public opinion, and virtue signaling in text is the race toward a limit (race to the top, race to the bottom, whatever). Intellectuals start to notice how blackmail works, and stop writing about the evil deeds they (may or may not) understand.
Not all human societies have had writing and even fewer had it in any significant way. Secondly, oral traditions are not always about entertainment nor are they only recorded by writing.
There's a very common trap when considering the past to look for "logical causation". It's almost always incorrect to do so and leads to this sort of wild speculation.
A key component of Jaynes' definition of "conscious" is introspection. You could still learn and do other things normally, but that little voice that tells you things like "don't do this" was attributed to the gods instead of introspection.
It's important to take this with a grain of salt. Looking at the fit of the model itself, it seems to be a bit overconfident/overfit to the current configuration of society: https://twitter.com/babeheim/status/1108737930804293632
That seems a pretty devastating critique. The model is so confident it takes hardly any concrete data points that don't match to disprove it. (Note I'm speaking math in that sentence, not English.)
I haven't read the paper yet, but this idea shouldn't be terribly surprising. Augustine of Hippo did a pretty thorough analysis at least of the Greco-Roman gods in the early 400s and wrote about how pagan gods generally aren't moralizing, and how the punishments or favors they give often have little to do with how moral a particular person is and the behavior of the Greco-Roman gods is also not an example worth following. One of the arguments that he and others made in favor of Christianity is that in our faith, the religious and moral come together, and that this was a mostly novel concept at the time.
Greco-Roman gods absolutely punished immoral behavior. It's just that "what is moral" has changed a lot over time. For example, the classical Greek period was a time when pederasty and war were considered perfectly moral. It's often pointed out that pre-Christian morality seems to differ considerably from post-Christian morality.
Also, Ancient Greece/Rome definitely count as "complex societies".
I think the OP's point is crudely valid. Greek gods basically wanted you to pray and appease them, more or less. Jesus and the Buddha wanted you to be a fundamentally better person, moreover, they were themselves enshrined in a metaphysical spirituality.
I always found it odd that Greek philosophers spoke of 'spirit' and 'soul', but that seemed for them to be a completely separate domain from Zeus et al.
Here's chapter 1 of 'Plato's Republic' [1]
Talk of 'soul' and 'appeasing the just/unjust gods' seem to be different kinds of subject material, whereas in most modern religions they are inherently related.
I often wonder if there's a populist bent to that: a 'soul' is an abstract, intellectual concept, removed from our daily lives. It'd be hard to get the plebes even to pay attention to such haughty ideas. But give them some Gods and Goddesses warring at each other's throats, tales of jealousy, death, intrigue ... now there's something they can buy into.
> I think the OP's point is crudely valid. Greek gods basically wanted you to pray and appease them, more or less. Jesus and the Buddha wanted you to be a fundamentally better person, moreover, they were themselves enshrined in a metaphysical spirituality.
"Being a better person" means different things in different moral frameworks. Plato fought in wars. Jesus didn't. We don't live under a moral framework where "dying in battle is the ultimate virtue, regardless of which side you were on". The Greeks did, though.
It's unclear to me how anyone familiar with Greek mythology could make the claim that the Greek gods were unconcerned with morality. The story of Sisyphus is a moral lesson. Greek myths are full of that stuff: characters are punished for some sin or rewarded for some virtue.
I don't think anyone is suggesting that Greek mythology is absent any kind of moral impetus.
But the Greek Gods seem fairly petty, jealous, vain etc..
Their description in the Iliad puts them as 'superhumans' more than anything; there is definitely no overarching metaphysical appeal to morality, or to a stoic search for the meaning of life. It's like they're just the superhuman overlords humans have to deal with.
Homer wasn't the only person writing in the classical period. His contemporary, Hesiod, depicted gods as immortal arbiters of justice in Works and Days.
The Iliad clearly struck some narrative gold, but was in no way the only manner in which the pantheon was portrayed.
I haven't read the paper yet, but the headline/summary keeps getting bandied about reddit, imgur, facebook, and twitter.
The implication seems to be that because complex society formed without complex religion(s), said religions are superfluous and maybe even unnecessary or deleterious to said society. I may be reading the implication incorrectly.
If I am not, then I'm quite puzzled by this line of inference. If complex society formed prior to complex religion, and the only complex societies still around* today hold, or used to hold until very recently, a complex religion as a core element of societal identity, would the implication be that complex religion is an evolved trait of successful complex societies?
*Around of their own volition, not relegated to reservation regions or whose cultural pelt is being worn by its vanquisher.
You're moving the argument too far. As complex societies developed without a moralizing god, religion is not necessary for morality. You can't use this to then say religion is unnecessary, as nobody has said morality is the only reason for religion.
Even today, many religious people will say that atheists are incapable of having morals. This is evidence showing that claim is ridiculous, not an attack on religion.
Atheists can certainly act morally, and many do. But I would say that atheists don't have a basis for having morals.
dictionary.com says that morals have to do with "right conduct". But in atheism, there is nothing with the authority to give you a definition of "right". There's nothing absolute in the way that, say, the Christian God is absolute, except the universe, and the universe doesn't tell you what right conduct is. (I mean, there are the laws of physics, but that doesn't really help.)
So the atheist usually takes something non-absolute and makes it an absolute - society, or the human race, say. The choice is probably whatever corresponds most closely to the atheist's innate sense of moral right and wrong. And that gives that particular atheist a basis for morals within their own thinking. But "I chose this standard because it matches my sense of what's right, and then I think my conduct is right if it matches the standard" is, essentially, "I do what I decide is right". It's something weaker than what I think of as morals.
The presence of a god effectively changes nothing.
Suppose there is a god that has defined right and wrong; how has this knowledge been passed to mankind? If it is innate, then an atheist would have the same morals as everyone else. If there is no knowledge given, then an atheist's innate sense of moral right is as likely to be correct as anyone else's.
The third option is that they come from religious texts, in which case you are picking a standard as it matches your sense of what is right. Even if the morals are absolute, our inability to accurately find them makes them indistinguishable from an atheist standard.
>But I would say that atheists don't have a basis for having morals.
>So the atheist usually takes something non-absolute and makes it an absolute
Okay, so that's a basis. The only issue here being that you don't consider a non-absolute basis to be valid. However, the deity of choice isn't absolute either - you have to deem it so in the same way that the atheist does. So either way there's no perfect ontological solution provided by either path.
> The only issue here being that you don't consider a non-absolute basis to be valid.
Well, it's not just me. A lot of people have, going back at least to Plato. Plato thought that you needed to have something absolute for there to be morals, but his problem was that his gods weren't big enough to provide that.
> However, the deity of choice isn't absolute either - you have to deem it so in the same way that the atheist does.
Yes and no. Yes, you have to deem the Christian God, say, to be absolute in order for Him to provide an absolute basis for your morals. (This is true whether or not that God exists.)
But the Christian is not in the same position as either Plato or the atheist. Within the Christian's position, God is absolute, and therefore can provide a rock-solid basis for a moral standard. In contrast, Plato's gods were inadequate as a moral basis, both in theory and in practice (that is, in practice they often behaved in ways that were not very moral).
The atheist has an absolute (at least, I think most atheists do) - the physical universe. Within (most flavors of) atheism, that's the only thing that can be really absolute, but it provides no moral standard whatsoever. (C. S. Lewis remarked that "what the universe is doing" is working toward the final and irreversible extinction of all life forms, and taking that as a moral standard would leave suicide and genocide as our only moral values. It seems better to just say "there's no moral standard there".) The atheist can pick a different basis for a moral standard and ascribe validity to it - say, the human race, or the atheist's society, or whatever. But the atheist has made a somewhat arbitrary choice - there's nothing within the atheist's position that compels that particular choice of moral basis. What's more, the atheist knows it. This is very different from the Christian's position, where there is one clear moral basis that can only be accepted or rejected.
> It seems like you're making an implicit distinction between deciding to be a Christian and accepting God
I wasn't intending to, but perhaps I should have been. A Muslim has the same situation as a Christian - somewhat different morals, but the structure of the situation is the same. The distinction I was drawing was between polytheistic gods and a monotheistic, absolute God.
> Yes, the position they just chose. It's like saying after you've made a choice, you can either take it or not
I see the "choosing the Christian position" as parallel to "choosing the atheist position". But my point is that, when you choose the Christian position, that defines morals for you. When you choose the atheist position, that does not define morals for you.
> I see the "choosing the Christian position" as parallel to "choosing the atheist position". But my point is that, when you choose the Christian position, that defines morals for you. When you choose the atheist position, that does not define morals for you
The parallel to "choosing the atheist position" would be "choosing the theist position," which would not define your morals either. From there, you can choose from a massive range of Christian positions which have different morals.
It could be that moralizing religion is not essential to the emergence of morality, but is essential to its long-term survival. For example, maybe morality and religion imply each other, and it just happens that most societies discover morality first (on practical grounds) and follow through with the other consequences in order to keep it.
There are plenty of (complex! organized!) religions where all the gods just... do what they do, rather than being optimizing forces that seek to punish humans when they "step out of line" of the god's desires. The religion of the Ancient Greeks is kind of like this (in that only some of the pantheon have any moralizing character; others just "party" or "move the sun", etc.) The beliefs of some aboriginal American tribes are also this way. And Sumerian religion was definitely this, from what we know of it.
In such religions, "Gods" have two faces: one as heroic figures in a creation narrative (which has little relevance to day-to-day life), and the other as explanations for natural phenomena. Such religions don't usually distinguish between modern religious concepts of "god", "demigod/angel/demiurge/deva/djinn", and "heroic human": anyone from history could somehow be responsible for any random thing the world does now, no matter if they were just a regular human in their part of the creation narrative. (The reductio ad absurdum version of this belief system is the "caricature of native American religion" espoused in movies like Avatar, where everyone who dies becomes part of an amorphous entity of "the ancestors" who are collectively responsible for everything that happens. But they don't moralize at people; they just... make the tides and stuff.)
If something bad happens to you in a non-moralizing religion, it's not because a God decided you were disobeying their creed, but rather because a God was doing stuff and you just happened to get trampled underfoot without their notice.
A plausible theory of such non-moralizing religions (and their characters) is that they're the memetic byproduct of "mythology in the mode of Calvin's Dad"—i.e. parents giving off-the-cuff answers to children who keep asking why, when there is no real why as far as they know. (For example: "why does that star bob up and down around the horizon?" → "That star is a lady named Ishtar, changing her mood between loving the world and warring with it.")
The best of those explanations stick, and are remembered and passed along; and then, after the characters are created in the form of explanations for phenomena, they get woven into the creation narrative, by the same sort of collective storytelling process that produced the Arthurian mythos. Bam: a religion. We have enough examples of this happening even during recorded history (and even continuing to happen in the modern day) that it seems like a pretty solid explanation.
Moralizing gods have a different creation process that we don't understand nearly as well, though the cited study is a step in the right direction.
(My own personal hypothesis is that such religions are formed by elders who want to create evocative, memorable characters to use in parables to enculturate the youth (i.e. to get them to stop being such little assholes.) They start off as basically the equivalent of the "Donny Don't" storybook from The Simpsons—simplistic paragons of virtue or vice—and then elements are added to the narrative through repetitions of telling [i.e. practice], and mutation during retelling [i.e. children growing up, having their own kids, and realizing the usefulness of the parable-as-social-more-capsule], until the story is compelling enough to take off on its own without the need for goal-directed retelling. And one of the key elements to making such a narrative compelling is the addition of punishment of the paragon-of-vice by a character who isn't virtuous themselves, but rather is neutral and high-status, being able to serve as a "judge"; a character that can "stand above" the whole civilization, such that nobody can tell that character what to do, but where that character can tell everyone else what to do, whether by fear or respect. A paragon of authority. A God.)
Why is it not self-evident that cooperation is driven by self-interest paired with intelligence? Animal A sees animal B in its territory. This tends to result in an immediate battle where both animals expose themselves to risk of death. Higher intelligence would instead tell you 'hey, if I cooperate with this guy, not only can we both keep all this land - but we can go take over the land owned by animal C'.
There's plenty of evidence of this happening in animals with greater levels of intelligence. For instance, chimps tend to join up into groups and then go engage in war with other groups. But they, in turn, lack the intelligence to go even one step further. The same logic of '2 individuals can go kill 1 individual and take their land' leads to 'two groups can join up to go kill one group and take their land.' Presumably some outlier will eventually manage to realize this, at which point you're well on your way to the evolution of another highly intelligent species spreading themselves all over the place.
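The payoff logic above can be sketched as a toy expected-value comparison. All of the numbers below are illustrative assumptions, not measurements - the point is only that under plausible parameters, cooperating against a third party dominates fighting each other:

```python
# Toy sketch of the fight-vs-cooperate argument.
# All numbers are illustrative assumptions, not data.

P_WIN = 0.5          # each animal is equally likely to win a 1v1 fight
TERRITORY = 10       # value of holding one territory
DEATH_COST = 100     # cost of losing the fight (risk of death)

# Option 1: A fights B over B's territory.
fight = P_WIN * TERRITORY + (1 - P_WIN) * -DEATH_COST

# Option 2: A and B team up, each keeps their own land, and together
# they take C's land with near-certain success, splitting the spoils.
P_WIN_2V1 = 0.9
cooperate = TERRITORY + P_WIN_2V1 * (TERRITORY / 2)

print(f"expected payoff, fight:     {fight:.1f}")
print(f"expected payoff, cooperate: {cooperate:.1f}")
assert cooperate > fight  # cooperation dominates under these assumptions
```

The qualitative conclusion survives a wide range of parameter choices, as long as losing a fight is costly relative to the value of a single territory.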
I also think there's even evidence of this in society today. When social order collapses, for instance due to a natural disaster, we turn to some pretty nasty behavior quite quickly even when it's in no way necessary. This seems to suggest cooperation, in and of itself, is not innate - but rather something else drives that cooperation. We can even see it in higher order systems. Corruption is a regular part of all societies at all levels. And that is people simply determining that they can get more by working outside the interests of society than working within it. It's quite easy to see how the same intelligence can drive both corruption and cooperation. Once again, self interest driven by intelligence enabling an accurate assessment of probable outcomes. Do I work just for myself, or do I work with this guy?
---
One argument against this is simple creatures that possess no semblance of higher-order thinking, yet cooperate. Examples would include most types of ants. But I think there are too many differences to start considering these sorts of species. They tend to be extremely short-lived and have extremely minimal needs relative to the quantity of resources out there -- that's unlike anything we can imagine. Imagine if a single banana could feed thousands of chimps for days. That rather reduces any motivation or incentive for competition. Well, except for when your thousands of familiars run into a group of thousands of unfamiliars - which does happen and can lead to 'ant wars'.
Because "self-evident" doesn't hold up to analysis. Just because you see a pattern, and you decide that it's obvious and common sense and everyone would recognise it, doesn't mean you're right in any of those assertions.
Without formulating hypotheses and then proving (i.e. testing) those hypotheses, your assertions are opinions; good for the pub, questionable-at-best for basing arguments on.
Especially when dealing with "common sense": don't believe yourself. The moment you go "but it's obvious that", actually prove it to yourself. And if you can't (and very often, we can't), then by all means keep thinking it _might_ be true, but recognize that you have no real basis for believing it and can't use it to back up arguments.
You surely appreciate that I absolutely could formulate various toy experiments to prove this hypothesis. And I'd of course also concede that you could similarly form toy experiments to disprove it.
This is ultimately the problem with social science. You can prove, or disprove, whatever you want with anything short of non-toy experiments. But non-toy experiments tend to be either impossible or unethical to execute. And even if we set ethics aside, there are so many confounding variables that even these unethical experiments don't necessarily resolve the question. E.g., would isolated individuals completely separated from humanity since birth behave differently in a favorable climate, an unfavorable climate, in areas with rich natural resources, or with minimal resources, etc.? And then you even get into genetic questions on top of it. Ultimately, you're not really going to be able to prove anything in these fields. Consider that more than 60% of psychology studies in well-regarded journals cannot be successfully replicated. And psychology is more of a 'personal' science. Introduce a social scale to these studies and it's all just a grand demonstration of pseudo-scientific confirmation bias.
> When social order collapses, for instance due to a natural disaster, we turn to some pretty nasty behavior quite quickly even when it's in no way necessary.
Only in misanthropist fiction and television shows. When social order collapses in real life previously equal people organize, cooperate, and protect the weak.
That being said, if there's some extreme social stratification in the group already, and there was previously a wider authority explicitly restraining the more powerful group from completely enslaving the weak group, of course that would change if the wider authority were eliminated.
During the flooding of New Orleans, the footage exemplifying "nasty behavior" broadcast on television were generally starving people looting grocery stores, cold, wet people foraging for clothing, blankets and bedding, or at the worst (and rarest) impoverished people looking for things they might be able to sell. The really scary thing was how police, both alone and as hastily-assembled white citizens' councils, started looting the town and threatening and killing flood survivors with impunity, and how the media cheered them on with a narrative of "nasty behavior."
Almost the entire source of nasty behavior in the case of that disaster was from people who were completely secure - the New Orleans police, police in nearby towns, and the national media.
> Civil disturbances in post-Hurricane Katrina were consistent with all existing research on disaster sociology, which concludes that “[post-disaster] widespread looting [is] a myth”,[51] and were vastly overstated by the media, ultimately fueling a climate of suspicion and paranoia which greatly hampered rescue efforts and further worsened the conditions of the survivors.[52]
> Some initial reports of mass chaos, particularly in stories about the Superdome, were later found to be exaggerated or rumor.[53] In the Superdome for example, the New Orleans sex crimes unit investigated every report of rape or atrocity and found only two verifiable incidents, both of sexual assault. The department head told reporters, "I think it was an urban myth. Any time you put 25,000 people under one roof, with no running water, no electricity and no information, stories get told." Based on these reports, government officials expected hundreds of dead to be found in the Superdome, but instead found only six dead: four natural deaths, one drug overdose, and one suicide.[43][54] In a case of reported sniper fire, the "sniper" turned out to be the relief valve of a gas tank popping every few minutes.[53]
I'm not referring to any incident in particular. This is a regular pattern. One problem is that conclusive data collection is lacking and suffers from severe underreporting. For instance, NPR was able to find more victims of sexual violence following Katrina than the entirety of the officially reported figures. [1] But there have been some efforts at collating what studies do exist. Citing a study that carried out an overview of available evidence (across various disasters - this is not unique to Katrina) [2]:
"The literature review also highlight that there are only a few studies, which focus on the relation between being exposed to a natural disaster and the rates of interpersonal violence.[16] The results of these studies reveal that being exposed to natural disasters such as tsunami, hurricane, earthquake, and flood increases the violence against women and girls, e.g., rape and sexual abuse,[39,40,41] intimate partner violence,[32,42,43,44] child PTSD,[45] child abuse,[46,47] and inflicted traumatic brain injury.[48]"
A significant increase in nasty behavior is not mutually exclusive with other people aiming to work together. In fact, it is rather significant evidence for the stated hypothesis, demonstrating the varying calculus between groups. It's people all simply doing what they feel is in their best interest. And cooperation, often for personal protection, emerges directly from this.
---
As one aside, I completely and absolutely respect that you cited your comments. Not enough people do this; others don't even concern themselves with evidence. But one point I would make is that Wiki tends to have issues with impartiality on issues that have political undertones, are relatively recent, or deal with social issues. And in this case you hit all 3 quite hard. This makes Wiki a particularly unreliable source for topics such as this. As one very overt example, the Wiki page on the effects of Katrina repeatedly implies official figures on rape incidents are accurate and representative of what actually happened. That is in spite of the fact that the same official behind said reporting acknowledged, "I admit that rapes are underreported. I know more sexual assaults took place."
> “Our results suggest that collective identities are more important to facilitate cooperation in societies than religious beliefs,” says Harvey Whitehouse.
Then it's pretty awkward that the West is engaged in a long program to critique and dismantle its collective identity.
The West isn't dismantling its collective identity. It is refactoring it. Key elements of what constitutes Western identity are restated in more defensible, universal terms through the process of critique.
Human dignity is a stronger foundation for our system of thought than a few statements from the first council of Nicaea or some other arbitrary religious authority.
> The deconstruction of an American identity into fragmented identitarian politics by socialists
American identity has been broken down into “fragmented identitarian politics” on sectarian, ethnic, regional, and other bases longer than there has been a philosophy called “socialism”.
Pretty much as long as there's been an “America”, in fact.
No, it's done by Americans who don't fit the previous definition of "American identity". Picture the "ideal American" and see who that excludes; and you'll have an idea of who these people are.
Historically, those groups melted into a shared American identity — that’s one of the things the US is historically famous for. Irish, Italians, and the “not white enough” Europeans; Chinese, Japanese, and Asian groups; recent African migrants, etc.
Every one of these groups integrated and thrived over time, without stalling out the process, because they joined a shared American culture focused on merit and cultural fusion via open debate.
Starting with identitarian politics in the 60s that process has stalled, to the point it’s become non-functional in modern times, with antipodal ideas like “cultural appropriation”.
This coincides with the infiltration of academia by Soviets, and the rise of many social movements using Soviet psyop mantras.
“It was always this way!” is lazy rhetoric, and afactual: there’s been a change in American cultural values over the past 20, 50, and 100 years — and we should talk about that, because they’re changes that impact our well-being.
> Historically, those groups melted into a shared American identity
That list of originating cultures stands out to me for what it excludes: slaves and their descendants, Native Americans, and Hispanics.
Perhaps that's what you're getting at, the American identity is risk-taking, ie those that would willingly leave everything behind for the chance at a better life. (Too bad for those who get left behind or the weak, if they're not part of our group they can just get trampled in the process.)
> Starting with identitarian politics in the 60s that process has stalled
Identitarian politics, such as the civil rights movement? Women's lib? All a Soviet plot to destroy America, obviously.
> Perhaps that's what you're getting at, the American identity is risk-taking, ie those that would willingly leave everything behind for the chance at a better life.
My point is that Hispanics or blacks in the 1960s-1980s weren’t any worse off than say, the Chinese during railroad construction or the Japanese during internment in material terms — whatever the historic record.
So it’s worth examining why some disadvantaged social groups integrated while others did not — or perhaps a better phrasing is “better integrated”.
Unless you’re racist, surely you must agree that the answer is cultural.
> Identitarian politics, such as the civil rights movement? Women's lib? All a Soviet plot to destroy America, obviously.
You’re being facetious.
Those movements had roots in genuine domestic issues; however, radical segments of each movement were supported by Soviets or their sympathizers - offering things like funding and advice on effective propaganda in order to sow civil dissent. These radical arms of the movements eventually took over, and transformed genuine civil rights movements into regressive and harmful ones.
It’s similar to their actions in 2016, when they funded advertisements for radical positions on the issue of race, gender, immigration, etc.
That kind of psyop to destabilize a country by funding radicals within is older than the US, and has been practiced by the CIA as well.
None of that detracts from the fact that radical feminism, identitarian politics, and intersectionalism all emerged from Marxist philosophy partially sponsored by Soviets.
Researching the Soviet idea of “useful idiots” is interesting.
The argument that there was never an American identity is part of the conceptual attack against the American identity. However, the denial of the existence of the identity contradicts any complaints about people's non-admission into it. So which is it:
1. The American identity never even existed, so all complaints of exclusion are invalid.
2. The identity existed, but some were excluded and that was bad, so the identity exists and is good, and assimilation is good.