Hacker News
Self-Awareness Might Not Have Evolved to Benefit the Self After All (sciencealert.com)
62 points by indigoabstract 43 days ago | 67 comments



> Rather than helping individuals survive, it evolved to help us broadcast our experienced ideas and feelings into the wider world. And this might benefit the survival and wellbeing of the wider species.

Not another "survival of the species" confusion. Social cohesion helps the survival of the individual genes. If anything, our society is actually rather precarious for species-level survival due to its self-destructive nature. Genes are not selected based on species-level survival.


> Genes are not selected based on species-level survival.

Taking the selfish-gene interpretation further, it could be argued that they're not selected based on individual-level survival either, unless "individual" means the genes themselves instead of the carrier. As long as the genes get to reproduce, the individual can be cast off like old shoes.

I can also imagine there could be a mechanism for "survival of the species" as emergent behavior from the collective interactions of individual selfishness. Looking at human society these days, it doesn't seem that way though.


> Looking at human society these days, it doesn't seem that way though.

It seems extremely evident to me. Humans expanded from a small group of primates limited to a specific region of east Africa to being globally ubiquitous, with a population of billions, and modifying diverse ecosystems to optimize for our own survival, all within the span of a few hundred thousand years.


True. It feels like humans are driven by individual selfishness, so much that it's threatening the survival of the species. But on a long timeline of centuries and millennia, somehow "collective selfishness" is working out well.

In terms of population, the past century has been a grand success, far above any local damage caused by immense selfishness of an individual or small group. Who knows, maybe such seemingly destructive tendencies are even contributing to further the collective.


> It feels like humans are driven by individual selfishness, so much that it's threatening the survival of the species.

I'm not sure I'm following. Individual selfishness seems to be driving rapid expansion and propagation of the human species -- where do you see any threat?

> But on a long timeline of centuries and millennia, somehow "collective selfishness" is working out well.

What do you mean by "collective selfishness?" I'm not sure I'm aware of anything that exists that matches how I am interpreting that phrase.

> Who knows, maybe such seemingly destructive tendencies are even contributing to further the collective.

What destructive tendencies are you referring to?



Kin selection is not the same thing as group selection as in "survival of the species". Kin-selection theories might "work" due to the emergent behaviour of social systems but "survival of the species" is something else altogether and unrelated.


If you actually follow the link, you'll see it's not only about kin selection, but also about group selection, and species selection would be a form of that.


I think there's some kind of rhetorical confusion in taking the fact that we have trouble explaining consciousness to ourselves on a philosophical/metaphysical level, as a sign that it also requires a very special explanation on the biological side. As far as evolution is concerned we can just take things at face value: I am conscious of seeing a tree because I'm looking at it and the image gets processed by my brain, and then I think about it etc... Obviously being aware of things is useful for an organism, and any philosophical problems beyond that are irrelevant to evolution.

Sure, this may result in the conclusion that consciousness is just an evolutionary by-product of other, useful things, but when a by-product is posed as a biological problem it's usually from the angle of wasting resources. If consciousness is just a logical result of plain awareness of sensory stimuli and thought, there are no further resources wasted on it beyond those needed for awareness and thought. The philosophical problem of how to understand consciousness is orthogonal.


I'd wager that an insect reacting and avoiding a tree without being aware that it is a tree is possible and advantageous, but being aware of trees brings a planning advantage that simple reactions can't provide.

IIRC the Portia spider can plan a route to its prey and follow it even if the prey is not in plain sight the whole time, which allows it to be a more successful predator. IMO that implies some level of awareness of its environment.


The idea that subjective consciousness has to casual power in decision-making seems eminently silly to me. Sure, it's fairly obvious it's not the only or even always primary thing involved in how people decide what to do, but it seems absurd to me to say that, for example, someone's inner train of thought comparing and contrasting two choices at length has nothing to do with which choice they actually make.


Most of the time we are on autopilot and our brain retcons the reason for an action, decision, or emotion after the fact.

But it seems clear that at times we consciously steer our autopilot in a new direction by choice. Some number of repetitions and that becomes the new autopilot behavior.

Actual conscious thought and control is too metabolically expensive and slow to use much of the time but it has its uses.


> Actual conscious thought and control is too metabolically expensive and slow to use much of the time but it has its uses.

I was about to write some long-winded reply about why this doesn't make sense. Is this such a crazy thought? Do people really feel like they are in control of themselves? After spending a lot of time thinking about this, I cannot understand how I make decisions.

I literally cannot tell you what I am going to do next. I can say that I am going to end this reply. But I have no idea when or why I am going to stop here.


Same, and this is why I think I'm also not much more than an LLM or a next-token predictor. All of my actions could be a continuous chain of falling dominoes; it all just seems to unfold naturally. I'm not trying to be humble or anything, it just seems obvious that this is how life is. Everything I do could be autopiloted. I do have an unexplained, special sense of consciousness, but I'm not actually seeing a strong need for it. I happen to be, or feel like I am, somehow special, but it seems like I could operate at the same or even better efficiency without that.


It's absurd to believe that consciousness is in any way responsible for decision making. There's no plausible scientific model that would allow for that to take place without breaking causality. Instead, we have strong evidence that points to consciousness being an epiphenomenon of brain activity, completely detached from action.


What a dangerous thought. So mathematical learning is predetermined by what biological factors? Isn't the conservation of the body's metabolism served more physically than by the utility of communication? The ability to fast, and ascetic rituals, are self-evident demonstrations of human motor and neural control departing from baseline biological processes. Unless you can explain a hypothesis for the development of motor-control habits, e.g. for healthy socializing? I suppose better group balance through control of the body's autonomic nerves is an adaptive response, but who is the one that gets the chain of being started? For conscious activity, is the answer: through random genetic assortment?


What about this conclusion then:

It's absurd to believe quantum physics can in any way be the right theory of the world at a small scale.

Why?

> There's no plausible scientific model that would allow for that to take place without breaking causality.

Indefinite causal structures show up in quantum theory [1].

It all hinges on what is meant by the opaque phrase "breaking causality".

[1] e.g. https://www.nature.com/articles/nphys2930


At the very least, consciousness causes enough impact on reality to result in the concept of “consciousness” to make it into dictionaries. If it was truly an epiphenomenon, such real-world impacts would be acausal.


From what I read it's actually the opposite. Studies seem to show that while our attention makes us feel in charge and at the helm, the decisions and the thoughts that arise happen way before our ability to influence them consciously. It appears that our consciousness is mostly a rationalization layer after the fact.

That's why there is a problem explaining consciousness in the first place. The more we dig, the more elusive it is.


As far as I know, those studies all look at minor "decisions", such as pressing the left or the right button after a sound is heard. They find that they can predict which button will be pressed earlier than the conscious decision to pick one was experienced.

However, there is no actual "decision" to be made there. It's a random choice, and it's not clear that random choices are conscious decisions at all.

It's much harder to devise an experiment that looks at actual, meaningful, decisions. And I'm not talking about something extreme like "do I love this person" or whatever, just a real decision, like the ones you might make in an RPG ("do I help or kill this NPC").


I'm puzzled by the downvotes. Am I saying something obscene or against the rules of polite discourse? I mean, I could be wrong sure but I don't think downvotes are for disagreeing with what somebody says but for punishing behavior that lowers the standards of the forum. Am I really doing that with this comment? Do I need to cite all the articles and books I read to form this opinion or else I cannot state an opinion?


It may sound absurd, but that doesn't make it so. There are good reasons to believe that at least in many cases (some of them being complex decisions), the conscious thought process is either ineffective or counterproductive.

Obligatory reference to Peter Watts' Blindsight, as well.


> The story follows a crew of astronauts sent to investigate a trans-Neptunian comet that has been found to be transmitting an unidentified radio signal, followed by their subsequent first contact. The novel explores themes of identity, consciousness, free will, artificial intelligence, neurology, and game theory as well as evolution and biology.

> Blindsight is available online under a Creative Commons license.

Here's the novel:

https://www.rifters.com/real/Blindsight.htm (HTML)

https://www.rifters.com/real/shorts/PeterWatts_Blindsight.pd... (PDF)

http://www.rifters.com/real/shorts/PeterWatts_Blindsight-v1.... (ePUB)


And the Echopraxia sequel too. Anyway, consciousness shouldn't be so bad in the real world if we as a species have been successful so far despite it. Of course, with the hindsight of the next million years some future species could say that we were doomed by our consciousness, but we can't know that right now.


> someone's inner train of thought comparing and contrasting two choices at length has nothing to do with which choice they actually make.

Even that is illusory most of the time. It takes deliberate effort and method to limit bias and personal preference.


Having personal preference doesn't make a decision illusory. I have a preference for vanilla ice cream, doesn't mean I can't legitimately consider or even choose chocolate.


That’s one of the points made in the article - intuitively it feels mad.

But there are experiments out there that have shown ‘conscious decisions’ about an action coming after the action rather than before.


What sort of 'actions' do those studies pertain to, though? Simple stimulus-response exercises? If so, I don't think anyone is denying the existence of reflexive or conditioned responses to external stimuli, but that in itself does not disprove that we retain the capacity to insert conscious reasoning between stimulus and response.

It seems to be a common pattern when encountering novel situations to analyze them consciously, make deliberate choices prior to taking action, but then to recall those prior decisions in future situations that involve equivalent inputs, repeating the same actions over time, to the point that we effectively "store" our past decisions as conditioned responses that the subconscious can apply directly. Things become "second nature" over time, and develop into habituated actions, but that doesn't mean that there was no conscious thought involved in the experiences that formed them in the first place.


"to casual" -> "causal", right?


That was supposed to be "no causal", but I only caught the typo after it was too late to edit it.


Yeah I’m definitely inclined to consider consciousness as more of a mechanism for feedback at the end of the chain, rather than the primary way of generating decisions.


The smartass answer is that evolution can't do things for reasons, it is just a matter of side-effects and one central tautology: that which grows and survives you wind up with more of.

Self-awareness is a spectrum instead of a binary anyway.


While technically correct, I think everyone understands that when people ask "why did evolution decide to do X", they don't mean it literally; instead they are asking "why was X a successful evolutionary strategy, such that the trait was conserved or even optimized through the generations into what we see today".


I've always felt it was pretty self-evident that the purpose of sapience in mammals is social first. In order to understand social standing and dynamics, a theory of mind is necessary. And, this theory of mind primarily benefits social situations, such as raising children and living in packs / herds / tribes / etc. I also believe this is why you sort of "observe" yourself; your observation of yourself almost happens in the 3rd person, and even when alone you think of yourself as observable.


The disconnect seems to be whether there is a social mechanism to both recognize self-awareness and then select for it, the way there might be for other socially shared features, like beauty, for example.

Likewise, if you compare the outcomes of two people who differ in attractiveness versus two who differ in self-awareness, those with more of the latter trait seem more likely to succeed even if they're completely isolated from society.

It could be that it won out as an adaptation from an evolutionary perspective because it has more than one category of benefit but to say that it evolved strictly for the social benefit does not seem serious.


I have the feeling that self-awareness is inherent in every system.

When a flower gets more sunlight from one side than from the other, it starts to turn to that side. Is that different from a person who walks somewhere because they like to? Most people I ask say "Well, the human can say 'I'". But a computer can also say "I". Where is the threshold to call it "self-awareness"?


What if the bimetal of a mechanical thermostat closes a circuit to turn on the heating due to a temperature change?

In other words, would you apply this to any system, or only biological ones? Is a sunflower turning to the sunlight as "programmed" as the thermostat?


That can also be said about other social groups. At some point, entities like a nation, a country, a social class and others develop self-awareness.

Although there is a difference: while an organism is the one saying "I" and not the cells or tissues, in a social group it's an individual (or a group of them) that says "We".


> an organism is the one saying "I" and not the cells or tissues

An organism is a colony of cells and in that colony only some cells are responsible for thinking / communication.


Panpsychism might interest you.

On the flip side (or maybe not) there's epiphenomenalism and the narrative theory of the self.

Personally I think there's more evidence for the view that consciousness is a post hoc justification for unconscious actions than there is for "it's consciousness all the way down".


Is there a difference between panpsychism and the idea that consciousness is just a post hoc justification?

To me they seem the same. When you say "3 stones make a trio of stones" you don't mean that a "trio" existed somewhere outside of the stones and now moved into the stones, do you? You just gave a name to this specific formation of a stone, another stone and yet another stone.


> Is there a difference between panpsychism and the idea that consciousness is just a post hoc justification?

I don't think they're necessarily inconsistent (hence "or maybe not"). But I do think that one person is unlikely to believe both of them.


Moreover, trees have chemical signalling mechanisms amongst related species in response to environmental stimuli. So a tree broadcasts to surrounding members of its species what's going on.


I'm not sure that it makes sense to regard any instance of an entity engaging in stimulus response behavior as being equivalent to self-awareness -- in fact, other comments in this thread include arguments that because humans often respond reflexively to external stimuli, we ourselves are not always self-aware in the process of our own decision-making.


The answer, my friend, lies on the other side of the rainbow. It's in the chafed lips of a unicorn. And the swollen ball-sack of a leprechaun.


I'm not sure I understand why what you describe isn't just the behaviour of the system? Even a mechanical system (take two connected cogs on separate shafts, turn one..) you could say something similar about.

Surely 'self-awareness' is intended to be about reasoning that the output shaft turned because [...], that the turn was made towards sunlight because [...]. I think this is what you're getting at with the human being able to say 'I', but I don't think you can gloss over that in that case, because it's the whole point isn't it?


Yes, I would say every system has self-awareness.

You say "reasoning" is needed. Ok, but how is that defined? Say you "ask" the cog why it turned by rattling it. Then it will rattle the other cog and, therefore, point you towards it. And you might say "Oh, I understand. You turned because the other cog pushed you". You might say that is just a mechanical mechanism. But if you look at human behavior close enough, it's just a mechanical mechanism too.


I think "self-awareness" as we are discussing it here does not just mean "causal relationship between stimulus and response". It specifically refers to a singular consciousness engaging in thought that translates into intentionality -- this is something that we only directly experience within our own minds, and it doesn't make sense to speculatively apply it to things that are observably very different from ourselves.


But we have the language to know that we're capable of that introspection, and that we do it. Isn't it fair to say of the cogs that, lacking the ability to talk about "I" or some other way to measure it, we don't know whether they possess that self-awareness or not?


What exactly does it mean that we "have the language to know"? And how do you prove it? How does the cog's reply in my last comment not match your definition and proof?

In my experience, when you try to define and prove it in a way that is not fuzzy, you come up with a definition that also matches the cog.


> But we have argued that consciousness may have evolved to facilitate key social adaptive functions. Rather than helping individuals survive, it evolved to help us broadcast our experienced ideas and feelings into the wider world.

Or it was a side-effect of creating empathy (mirror neurons), which is useful for not short-sightedly doing eye-for-an-eye, thereby killing off your social context every Thursday. Perhaps this allowed us to become more aggressively competitive/greedy while still preserving family social cohesion.

If you (1) can feel empathy but (2) it still feels a little bit different from your own emotions, it doesn't seem far-fetched that this would lead to self-awareness. (2) could happen simply because the eyes' bandwidth is much higher than storytelling's, making your own experiences feel more nuanced.


The reasoning in this article feels very human-centric. Most animals are likely conscious and self-aware to some extent. Yet, lots of species are loners, not what we would consider social. How would you explain their self-awareness?

I also don't agree with trying to find a single reason for self-awareness. Lots of evolutionary adaptations are beneficial in more than one way. Our bloodstream delivers nutrients to cells. Our bloodstream also distributes immune cells throughout our body. Our bloodstream also clots when we have a cut and carries cells to help repair damaged tissue. Why can't self-awareness be beneficial in multiple ways as well? Why can't it have multiple reasons for evolving?


Hmm this seems very confused, and it's arguable whether more consciousness / self awareness has any net benefit for social cohesion.

It's very useful from an evolutionary standpoint to have an accurate internal model of 'me', which you can project into hypothetical scenarios and learn from, reducing risk and the amount of trial and error (and so increasing survival). I like the Hofstadter view that consciousness arises from the 'strange loopiness' of this self-awareness of the 'I' model.

Any social benefits are secondary, rather than the driving force, and are a result of this model and differentiating 'me' from 'you'.


What a confusing article.

I think it's trying to say that self-awareness developed for the purpose of social cohesion, to help communicate emotions with the group.

Fair enough, but I thought that had already been a popular theory for a very long time. Am I incorrect about that, or do I just misunderstand what is new about their theory?


There was a discussion the other day about how we should explain "simple" things very obviously, because we just assume everyone knows what we know.

I didn't know about this theory and I'm glad the writer made an article about it.


> we just assume everyone knows what we know.

Yes, the "Curse of knowledge" cognitive bias can cause the intelligent to be unintelligible.


Many years ago, a professor said to us something like "I am uncertain that we can use the system to explain the system."

(Meaning that we may not be smart enough to be able to explain why and how we are smart enough to explain our cognition, if we can explain it at all)


The concept of a Universal Turing Machine (UTM) suggests that, given enough time and memory, it can compute anything that is computable. By extension, if human cognition is computable, a sufficiently advanced computational system could, in theory, model and explain it.

This doesn't necessarily mean we'll easily achieve this understanding, as the computation required might be extremely complex and time-consuming. But it does suggest that the explanation of cognition isn't fundamentally beyond reach, even if we're using the system to explain itself.
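
To make "universal" a bit more concrete, here's a minimal sketch in Python (illustrative only; the generic interpreter and the toy unary-increment machine are my own assumptions, not anything from the article): one program that runs any Turing machine handed to it as a table of rules.

    def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
        """rules: {(state, symbol): (write, move, next_state)}; blank cell is '_'."""
        cells = dict(enumerate(tape))          # sparse tape
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # Toy machine: walk right over the 1s, then append one more (unary n -> n + 1).
    increment = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(increment, "111"))   # -> 1111

The same run_tm runs any other rule table unchanged, which is the sense in which one machine can in principle model any computable process, including (if cognition is computable) the one doing the modelling.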



Darwinism was revolutionary for its time but we are kind of moving beyond it already.


With what?


QM and GR have informed a lot of them but there are others from e.g. computer science.


How do you make a system that can model its own state based on sensory input without giving it subjective self awareness?


It’s called adaptive control theory [0] and has already been in use for decades in robotics, aviation, and so on. Whether you want to start calling solving constraint equations “self-awareness” is another story.

The short description: you have an initial idea of the state (an estimate), you take sensory input (literally from sensors like thermometers, cameras, potentiometers, etc.), and you solve constraint equations that both refine the estimate of the system's state and tell the system what to do to achieve some objective. The objective could be something like preventing an inverted pendulum attached to an electric motor from falling over.

[0]: https://en.m.wikipedia.org/wiki/Adaptive_control
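
A minimal sketch of that loop in Python (illustrative only: the linearized pendulum, gains, and noise levels below are made-up values, not the method from the linked page). The controller never sees the true state; it keeps an internal estimate, refines it from a noisy sensor, and acts on the estimate.

    import random

    dt = 0.01                 # timestep [s]
    g_over_l = 9.81 / 0.5     # gravity / pendulum length, pendulum linearized near upright

    theta, omega = 0.05, 0.0          # true (hidden) plant state: angle [rad], rate [rad/s]
    est_theta, est_omega = 0.0, 0.0   # the controller's internal estimate of that state

    for step in range(500):
        # 1. Act on the estimate, not on the true state.
        torque = -40.0 * est_theta - 6.0 * est_omega

        # 2. Predict how our own state should evolve under that action (internal model).
        est_omega += (g_over_l * est_theta + torque) * dt
        est_theta += est_omega * dt

        # 3. The real (simulated) pendulum evolves.
        omega += (g_over_l * theta + torque) * dt
        theta += omega * dt

        # 4. Sense a noisy angle and nudge both parts of the estimate toward it.
        innovation = (theta + random.gauss(0.0, 0.01)) - est_theta
        est_theta += 0.2 * innovation
        est_omega += 1.2 * innovation

    print(f"true angle {theta:+.4f} rad, estimate {est_theta:+.4f} rad")  # both should settle near 0

Whether you call the internal estimate "self-awareness" is, as above, another story; the point is just that modelling your own state from sensory input needs nothing more exotic than this.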


How is that not a form of subjective self awareness?


Sounds very Peter Watts-esque.


Mhh, if I have evolved to a point where my experienced feelings and ideas are of such complexity that animal cues no longer suffice to broadcast them to the group, then self-awareness might have evolved in tandem with the more complex noggin, and not as a downstream feature.



