I think it’s important to think of consciousness as a spectrum. Having studied artificial life, the conclusion I came to is that something needs to be situated and have some degree of nervous system to be conscious. A rock is not conscious, but a tree might be. A fly is less conscious than a dog. A robot may be conscious if it has enough sensors to react and significantly change its behavior due to processing (rather than simple bump or range sensors acting as limiters).
But really at the end of the day, it’s sort of a bullshit word people tend to use to imply that humans are somehow special beyond just having an insane number of neurons pound for pound.
Rock consciousness seems like a potentially interesting special case. If a rock perceives and reacts to its environment by being selected and crafted into a tool by a human, might it then claim to be living in a holographic universe?
I don't think there's any need to invoke as "magical" an idea as consciousness being a "fundamental property of matter" to reach the conclusion that many more things than we think are conscious.
A 10,000 ft view of the human brain includes: memory, emotional state, sensory input, the ability to monitor and control internal processes including memory and emotion, motor control, and a logical system to tie these together into a system that attempts to achieve internal goals guided by memory and emotional state. Other mammalian brains, and animal brains generally, are presumably very similar.
What I don't understand is the need to invoke a je ne sais quoi -- a soul, quantum mechanics, whatever pseudo-matter the article is talking about -- to explain consciousness. Why should the above processes themselves not be sufficient to give rise to what we experience as consciousness, as an emergent phenomenon? It's not in everything, but it's in everything that has more-or-less all the pieces above. We could even make a crude consciousness, with today's technology.
It's like if, in trying to understand the essence of language, after analyzing lexical and grammatical structure and successfully describing all the visible parts of a language, linguists just shrugged and said "well, this is how English works, but we don't know what makes it a language... there must be something invisible that is key". Well no, it turns out that just the parts we can see and describe are sufficient. We can even make one by putting those parts together (Esperanto... despite some linguists who do -- bizarrely, to me -- claim it lacks a "je ne sais quoi"...) and it acts just like a real "language".
I guess what's tricky is, unlike language, there's no way to "inhabit" an artificial consciousness to try it out and see if it really is "real". Many humans don't believe animals are conscious despite overwhelming biological and behavioral similarities. What sort of test of an artificial consciousness would not be hand-waved away as merely a simulacrum? One might even say the only falsifiable definition of a conscious entity -- other humans included -- is whether it has similar enough brain-parts as yourself.
People can't stand the idea that they are just part of a causal chain with no more free agency than a pebble buffeted by waves on a beach, so they cling to the idea that they are special.
Nah, the fun is that you get to be aware of what your body will do before others experience it. Being conscious is like being your body's own Patreon supporter.
> What I don't understand is the need to invoke a je ne sais quoi -- a soul, quantum mechanics, whatever pseudo-matter the article is talking about -- to explain consciousness. Why should the above processes themselves not be sufficient to give rise to what we experience as consciousness, as an emergent phenomenon?
Because consciousness, my experience of it at least, is a fundamentally different type of stuff to anything that constitutes a brain. It's certainly possible to arrange brain-stuff (or silicon chips) to enact motor control and logic. But the jump from that to qualia, to there being something that it is like to be that brain-stuff, is certainly nontrivial. The analogy that occurs to me is that you're assembling a really complex jigsaw puzzle and expecting it to fly.
I agree that it's basically impossible to test an artificial consciousness to distinguish it from a Philosophical Zombie... but to me that's a limitation of the scientific method, not of the concept of consciousness, given that "I am conscious" is the thing I am most certain of in the world (given my experience of it).
I'm a materialist and on balance I don't think there is anything supernatural about consciousness. But that's not an easy position to hold.
Nothing, but if there's one thing I've learned discussing this at length with philosophers and AI students it's that once you get down to it and all the argumentation is gone it will boil down to "I'm special because I want to be". In that sense, discussing these matters with most people is just not a productive use of your time. Unless perhaps you've never done it before and you want to find out what that's like :)
I guess the way I see it, given that we don't currently have a way to test for consciousness, inventing invisible "consciousness-stuff" is just a solution in search of a problem. We are supposing that it is necessary with no evidence.
> The analogy that occurs to me is that you're assembling a really complex jigsaw puzzle and expecting it to fly.
I would though. More precisely -- if I looked at a real airplane and modeled it with, say, balsa wood and a rubber-band propeller, even with no understanding of why airplanes fly, I would expect it to fly, and I think that would be reasonable. I don't think I would conclude, just because flight is so fundamentally different from what wood and rubber bands normally do, that a "real" airplane must also be composed of some invisible "flight-stuff" that I cannot harness for my model. Even if I, say, neglected to angle the propeller blades or wings correctly, so it did not fly or even move on its first attempt, supposing a new branch of physics would not be my go-to explanation.
I do see your point though about qualia and philosophical zombies (thanks for introducing me to that term). It's supremely challenging to conceptualize who the "me" experiencing consciousness is.
I was ruminating on this on a long walk recently and it occurred to me that I'm probably not "only inside my brain", as you put it.
"You" aren't only inside your brain at all. Imagine that at every timestep, there's a single consciousness that experiences qualia all at once, across all matter - and the feeling of being "inside your brain" is just that the qualia happening there have a large number of causal relationships to your recent brain activity, your working memory, longer-term memories, inputs from your senses, etc - but at that same instant, "you" are also the person sitting across from you - there just isn't the same level of cross-chatter and so "you" can't experience both brains simultaneously.
Kind of like context switching in an OS or something. This would enable a panpsychic view where consciousness was field-like, taking on large values where matter is densely causally connected, such as in the brain, and low values where it isn't, such as in the empty space between two people.
Anyway, as Hicks said, "we are all one consciousness experiencing itself subjectively ... here's Tom with the weather."
I actually wrote a longer comment but deleted it because I didn’t think it was fully formed:
> Maybe it’s like an operating system context switch. I feel like I am in one mind because I have memories of what came before even microseconds ago. But how do I know that my memories were not just loaded right now for processing...I guess simulation theory stuff. We don’t really have any proof that we were actually in control of the moment that just happened a microsecond ago - that could all be an illusion.
Just a funny coincidence that on such a topic as a global consciousness we had such a similar idea ;)
Ha, that's a lovely coincidence, thanks for sharing.
FWIW I went down this train of thought while listening to an audiobook of Hofstadter's "I am a Strange Loop", which doesn't quite go down the same road but triggered the idea well enough in my own brain.
Because your brain is physically inside your body just like water is physically inside a bottle.
Speaking of sci-fi maybe in the future you will get a brain implant and someone could remotely control you or manipulate, transfer or observe your consciousness.
Btw, the notion of mind uploading is very feasible imo.
>But why is your experience tied to one brain and not another.
Because of evolution. Life evolved from single-celled organisms to multicellular organisms, building on top of individuals. Individuals which lived and cooperated in one community and one habitat evolved individually while living and reproducing together as one species, spreading genes to their offspring. There is no shared brain or shared consciousness in living beings; brain and consciousness are individual properties.
Something that is somewhat close to shared brain or shared consciousness is collective intelligence.
Because the above processes are too slow. Our society has a slow framework to move physical goods around and a layer above where we move information around at a much higher pace. That upper layer can be called a soul of our society.
Consciousness is a feature of a neurological construct such as the brain, or of some other construct that we don't yet know of.
The likelihood of reaching consciousness is directly proportional to the level of intelligence.
To elaborate: I think consciousness depends on intelligence; the more intelligent a living organism is, the higher the probability it will reach the point of consciousness, and nobody knows exactly where and when that point is.
If primitive living organisms show patterns of conscious-like behavior, it does not mean they have consciousness, because they live by rules that are encoded and heritable. Such individuals have no control or awareness over their behavior and are not aware of themselves or their surroundings. They reproduce or replicate and are subject to the biological forces of natural selection, with the payoffs of the game (life) representing reproductive success (biological fitness).
I personally believe in the theory of panpsychism but it raises so many crazy questions.
What defines an object? If everything is just a sea of atoms where precisely is that boundary that says "this is a rock, and it as a whole is conscious". If I take a chip off that rock, do we now have two distinct conscious entities?
If I cold weld two huge pieces of steel together so they fuse at an atomic level, did I just turn two independent slabs of steel into one larger conscious entity? When does it become one entity? When the first atom between the two is fused together, or does it become "more" conscious as I continue to weld it together?
The rabbit hole goes deep with the concept of consciousness.
You might be interested in emergentism and constructivism, as I believe they provide useful concrete answers to your questions. Panpsychism falls into the slippery question of defining what "consciousness" is; emergentism provides the frameworks of cybernetics and systems theory to approach many philosophical issues, and can be applied in a panpsychism-ish way by using the concept of self-organisation and adaptation to the environment as the scale for "consciousness".
Constructivism says that our experience of reality is based in our perceptions, and our labels for things are just useful mental models. An object is defined by how we use it - a chair is a thing you sit on, for example. The ship of Theseus is whatever Theseus uses to travel the river, regardless of the planks it is made of.
I've not yet run into any internal contradictions in this worldview, which is the stick by which I measure sensemaking model efficacy.
Wrong ideas can often hold academic credibility. Not saying this particular idea is right or wrong, just that academic credibility may not mean much when it comes to matters of mind of which we may never really know much about.
A rock could very well be conscious, or not. Let us say that (by someone's definition) it is, what then? Someone else is not going to agree that it is.
Is there an objective definition? Can it be experimentally verified?
I think that the quality of consciousness is a good fit for brains. If a plant has consciousness it doesn't mean that much because there isn't anything like the brain to experience things. Although I have read that they may have very basic sensing structures.
Permutation City by Greg Egan - it's fiction, but honestly it will do as good a job as any textbook on the subject to force you to really think about your beliefs about the nature of consciousness.
I'm reading the Bicameral Mind book by Jaynes, and regardless of the "mind-body problem solution" it posits, it has a good overview of the problem of consciousness at the beginning of the book.
We do NOT understand our own consciousness so it's idiotic to project what we don't even understand to inanimate objects.
This is superstition and anti-rationality. The right answer can always be "we don't know" but this is deciding "nope, we can just make shit up because it feels good!"
Basically taking civilization backwards by hundreds if not thousands of years!
In other news, academia is coming to accept that "consciousness" can have no objective definition. Users of outlying definitions that appear useful in their field thus have no need to invent a new word for their particular choice. In some cases the entire lack of any meaningful definition is the whole point of using the word.
Lack of a definition, or agreement on a definition, has never been seen to be an impediment to publication, and that is ultimately what matters.
Well, the age of reason was nice while it lasted. "I think, therefore I am" is rather outdated. We should adopt this new age paradigm, which Descartes could not have hoped to discover in his primitive time: "It is, therefore it thinks."
So consciousness is some kind of messy combination of matter responding to inputs, of connections between particles. This feels wholly inadequate. My human consciousness is reduced when I get shot in the head even though the particles are still there; likewise when my brain deteriorates, and when I die. If you put my matter into a container, all that matter is still there, but I'm no longer able to have human consciousness (unless there is some kind of thing outside of physical laws, which I am not going to just jump to).
So this feels like a useless definition. Like saying the computational power of something is a product of its mass. It's much more than that.
It's not useless, it's a valid approach to study the subject.
Consciousness here is downgraded to a very "simple form", to more fundamental components that all matter possesses in one way or another. So it does not mean that a rock is sentient the way your mind is. It's more like saying, e.g., that quantum entanglement could be a component of consciousness. A rock can interact via quantum entanglement, therefore it would be plausible that a rock could have a tremendously primitive form of consciousness.
You can compute things by moving rocks around and placing them just so. (or using an abacus, etc)
Therefore (if you accept that intelligence is computable) there could be an intelligent mind denoted by a sufficient number of rocks over a sufficient period of time, even if it took billions of years to have a thought.
Whether "naturally" placed or by some agent.
I think this is not original of me, but I forget the source.
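The rock-computation claim can be made concrete with a toy sketch (hypothetical code, not from any real library): numbers are encoded as counts of rocks in piles, and the only primitive operation is moving a single rock from one pile to another. Addition then falls out of pure rearrangement, abacus-style.

```python
# Toy "rock computer": a number is denoted by how many rocks sit in a pile,
# and the only primitive is moving one rock between piles. Names like
# `move_rock` are illustrative inventions for this sketch.

def move_rock(piles, src, dst):
    """Move a single rock from pile `src` to pile `dst`."""
    assert piles[src] > 0, "no rock to move"
    piles[src] -= 1
    piles[dst] += 1

def add(piles, a, b):
    """Add pile `b` into pile `a` one rock at a time.

    Each step is a purely mechanical rearrangement -- something wind,
    water, or a very patient agent could in principle perform,
    however slowly.
    """
    while piles[b] > 0:
        move_rock(piles, b, a)

piles = {"a": 3, "b": 4}
add(piles, "a", "b")
print(piles["a"])  # 7 -- the sum, represented entirely by rock positions
```

The point of the sketch is that nothing in the computation depends on the substrate: the same sequence of moves carried out on a beach over a billion years denotes the same arithmetic.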
If we can replicate the information processing in 1 minute of brain activity by rearranging and moving rocks in a specific way over a 10-year period, would that arrangement of rocks have subjective conscious states, or not?
If yes, it's a good argument in support of panpsychism, because those conscious states would seem to be substrate-independent. And if not, why does the brain have subjective states but these rocks do not? What makes our wetware particularly special?
There's clearly something profound going on here, even if it's currently beyond our ability to articulate properly within a scientific framework.
Feedback. Our brains are circuits, rocks are not. No matter how hard you try, rocks are never going to rearrange themselves if left to sit. Self-organization is very common in nature, and I find it easy to believe that consciousness is an emergent process that necessarily encompasses some level of self-organization.
>rocks are never going to rearrange themselves if left to sit
I don't understand this statement. What are mountains? What is sand? What is a planet? What is an asteroid? Where do you think rocks came from in the first place? Over substantial periods of time, rocks are always being rearranged whether or not humans interfere.
Perhaps you are drawing a distinction between being rearranged and rearranging themselves, but I don't understand that distinction. If you think that humans rearrange their own brains, I think that is an impossible thing to even define. How can anything act on itself without an intermediary? What kind of evidence is there for it? Without energy input surely you'd agree that no brain can make itself function? So how is a brain different from a waterfall or rocks moved by a river?
Ok, maybe not useless. It feels so far away from what I imagine a human's or a dog's "consciousness" to be that it is hard to see how to rate the jump up from an ant to a mouse, dog, great apes, etc.