So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?
On the one hand, you might argue of course not, it’s just a computer; computers “obviously” don’t have feelings. It’s just a bunch of bits: a precise collection of patterns of electrical charges evolving in time according to rigid rules.
On the other hand, is our own biological brain any different? We don’t think of a few cells as having consciousness, but somehow the broader collection does. Can the biological substrate of those complex arrangements really matter?
Maybe consciousness is like a “soul”, it’s a completely untestable, untouchable “ghost” that can inhabit anything. So maybe everything really does feel, in its own way? Maybe my computer already has some form of consciousness...
> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?
If you answer "yes" to this question, what happens if you split the computations of that simulated brain over a million people equipped with pen and paper? Where is its consciousness happening now?
I have yet to hear a satisfying answer from anyone who believes that an AGI will automatically be conscious. But then again, I have yet to hear a satisfying explanation of consciousness from anyone...
Consider performing the algorithm of AlphaGo Zero through a million people equipped with pen and paper. One can hardly argue against this being possible in principle, but what is playing the game?
This is not proof that AGI will automatically be conscious, or even that some sort of conscious machine intelligence is possible; it is merely intended to suggest that Searle's line of argument does not show its impossibility.
So I simplified the problem, by replacing the ability to understand Chinese with the ability to see in colour: http://www.fmjlang.co.uk/blog/ChineseRoom.html
While the simplified system is able to distinguish colours, how does it have the first person experience of seeing in colour?
So yes, the guy in your room can experience colors by using the sheets (though in a different and much less capable way than most of us, hence him not wanting to equate this with our common understanding of experiencing color), and is color-blind without them. And I kind of become deaf when plugging my ears, though that experience is vastly different from that of someone who never had hearing to begin with.
That doesn't answer what it is like "seeing red" or "feeling blue". I don't know what it's like for you. I can only look at your behavior, see that it's similar to mine under the same circumstances, and thus assume your internal state to be similar. Until your behavior tells me it's different... like someone blind running into a wall, or someone approaching a chasm with no signs of fear.
ps: had to google "feel blue", as its meaning "depressed" isn't really a thing in my native language. Translating (color-based) idioms is actually kind of telling... it seems possible directly only if both cultures shared the relevant aspects / experiences.
So yeah, conflating consciousness with intelligence doesn't make a lot of sense to me. But I'm not sure equating it with perception gets us anywhere better (hope the idiom didn't get lost in translation).
The person in my room (an achromat) can't experience colours at all, and they're able to tell you that. Their eyes have no functioning cones. They see everything in monochrome, so to them, watching a colour movie is how watching a black and white one is to us. Under low light conditions our cones are inactive and we see only in monochrome using our rods, just like achromats do. So people with normal colour vision can experience what it's like to be achromatic, but achromats can't even imagine what colours are like.
In the room, the person can only tell what colours the papers are by observing how dark they look when overlaid with the coloured filters (whose colours they know).
The metaphorical use of colours in the language isn't really important, and as you point out is dependent on culture.
There is a sense in which the system has an experience (necessarily first-'person'), but it is nothing like what we experience on seeing color. It is no more or less an experience than an ordinary color meter would have, and it lacks any awareness or concept of having an experience, no memory of it, no recognition of a similarity to previous experiences or of it being unlike any previous experience, no ability to imagine such experiences, no expectation of future experiences, and no emotional reaction to it ('my favorite color!') It is an experience only in the weakest sense possible.
I prefer a simpler analogy: what step in a sorting algorithm actually does the sorting? The question is actually nonsense, because every step is necessary, and removing any single step will break the sorting property for some inputs.
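To make the analogy concrete, here is an ordinary insertion sort (my own minimal sketch, not tied to any particular implementation in the thread):

```python
def insertion_sort(xs):
    """Plain insertion sort, shown to ask: which step 'does the sorting'?"""
    xs = list(xs)  # work on a copy
    for i in range(1, len(xs)):
        j = i
        # Each swap is just a local compare-and-move. No single swap sorts
        # the list, yet removing any of them breaks the result for some input.
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]
            j -= 1
    return xs
```

No line of this function is "the sorting"; the sorted-output property only holds of the whole run.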
When you have all the relevant information, it's clear that the original question itself is ill-posed. We lack some of that information when it comes to consciousness, hence why the Chinese Room is convincing to some.
You are right that we don't have a good (any?) definition of consciousness, so it is difficult to properly ask whether the Chinese Room has it.
After reading and talking with people, it seems that consciousness is something one can only claim to have (like a subjective feeling), but can't really demonstrate (I can only assume you are conscious because I believe I am, and you behave similarly enough).
But can a bigger system have consciousness when individual parts of it are already conscious? I don't see why this should necessarily follow. I'm sure there are somewhat similar and interesting emergent properties, but we'd probably call them something different; probably closer to the kind of group identities we already have with lots of humans interacting. Though obviously we'll never know what it feels like, just like none of my neurons will ever feel like I do.
If you had a physics model, it could describe the watch perfectly, but it would likely involve a very large group of matrices and linear-algebra constraints. You technically could work through every equation given a large enough set, but examining the watch yourself would likely be quicker, if not learning to be a sufficiently good watchmaker.
However, if you assembled a computer model to handle every component's interactions individually, and a physics engine to do the number crunching, you could get a working mathematical model of the clock.
Bob: Of course.
Alice: But what if you scaled up all the neurons in the brain a factor of 2? Would it still be conscious?
Bob: Sure, if it's the same system running the same way. There's no reason to expect it to matter.
Alice: And if it was a factor of 4 larger?
Bob: Absolutely. Human brains are sized as they are as a matter of evolutionary coincidence. Bird neurons are much more compact than ours, and still manage much of the same work.
Alice: Right, but if every neuron also contained chloroplasts?
Bob: Which did what? There isn't much light there.
Alice: Nothing, they're completely ancillary.
Bob: Then yes, evidently it's still the same system... at what point does this become a hard question?
Alice: Just hold on a few orders of magnitude...
If you scale the system but keep one end tied down it isn't the same system.
My reason for comment related purely to the looseness of the conversation, w.r.t. the word "scale", which, it seems I had misunderstood to mean an affine transform.
In light of this comment, I now see you were talking about some growth function or change in topology/structure.
Thanks for the clarification.
The thing with information is that it is not as localized as other physical quantities like spin, mass, or even energy. Conversely, if the information were localized, we would call it a lack thereof. Information between systems means correlation between them. The same goes for information processing, where changes in one system correspond to changes in the other. It happens in between.
Also, you run into the problem of time slices. One step in a simulation of my brain is not conscious. There must be some self-reference going on in there that gives us this illusion over time, which I doubt a pen and paper model could capture.
I'm of the school of thought that a Turing machine, let alone pen and paper, might not even be capable of this simulation. For example, it's possible that there are quantum effects going on in biological neurons that Turing machines are not equipped to simulate.
Of course, we can not yet rule out the possibility that there exist more powerful models of computation than Turing machines. We just know quantum effects don't lead there.
Our brain's organization can be described by its various information cascades. But if consciousness is implementing the right kind of information cascade, then a brain simulated by a million people would be conscious. The "where" of consciousness is the same question as the "where" of informational relationships. But this isn't a well-formed question. There is no one location where some sequence of bits and its proper interpretation is stored. It's distributed among the components involved in the information cascade. The "where" of consciousness is the same.
So it's not surprising if we struggle to answer "Can machines be X?", when X is undefined.
Do you think beings in this second run of the universe are conscious? Or do you think that somehow something has been left out of the simulation?
So if there are conscious beings in the universe, they'll be in the simulation. You hold that you're conscious, so I'd hope you'd agree the simulation has at least one conscious being in it.
I don't know why some folks' consciousness tests only seem to work for themselves. I've been convinced by the simple argument by induction: I am an example of an animal. I have consciousness, so others likely do too.
I haven't really found a compelling rational argument to fully reject solipsism.
Though I agree that looking for yourself in a perfect simulation is a clever hack. :-)
Here's Dennett on this very subject, about 20 years ago. In Consciousness Explained he imagines what a Turing Test conversation between a human 'judge' and a 'Chinese Room' might look like:
Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”
CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.
J: Fair enough but I told you the joke because I want you to explain it to me.
CR: Boring! You should never explain jokes.
J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?
CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?
Dennett: “The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all.”
His point (if I need to re-explain it?) is that the Chinese Room argument is superficially compelling. But if you actually follow through and imagine the Chinese Room doing the things that the thought experiment defines it to be doing, then you get a picture of an extraordinarily complicated system, and the nuts and bolts of exactly what that system is running on fall away into irrelevance.
I think this is a common failing of thought experiments. Gedankenexperiments help us ask better questions and design better real experiments, but if you're trying to use them to make fundamental conclusions about reality in an a priori way, you're likely going to fail.
You can grow biological brain structures from stem cells. These are called brain organoids or neural organoids; they may one day be used in machines, and may one day have consciousness-like properties. Interview about this with Lex Fridman (MIT Artificial Intelligence teacher) and Paola Arlotta (Harvard brain neuroscientist): https://lexfridman.com/paola-arlotta/
> We don’t think of a few cells as having consciousness, but somehow the broader collection does.
This is called an "emergent property" (see "Emergence" on Wikipedia), a sub-area of complex-systems theory. There's a theory that consciousness may appear as an emergent property of a highly complex, dynamical, and evolutionary AI system.
> imagine an incredibly-detailed computer simulation
Some paradigms to approach AGI (Artificial General Intelligence / Superintelligence) are: 1. Simulate the brain in low-level detail; 2. Simulate the functioning of higher-level structures of the brain (cortical columns, visual cortex, hippocampus...); 3. Create artificial neurons (that can be used in neural networks, deep learning, etc.); 4. Simulate simple brains and nervous systems (e.g. jellyfish, insects, the C. elegans connectome project, etc.) to understand how intelligence works; 5. Simulate the behavior of the brain ("cognitive architectures"); 6. Create an evolutionary system that could converge to brain-like functionality.
It probably helps to think about some edge cases to get some insight.
What happens when you sleep and lose the sense of consciousness?
What happens in cases of brain injury where consciousness goes into weird failure modes: thinking it wakes up in the same state every day, thinking it's a month in the past, collapsing, etc.?
What happens in cases of a distorted or malfunctioning brain, e.g. schizophrenia?
I've read somewhere that if you put in an implant which mimics neuron activity, your brain will build structure around it just as with normal neurons. What would happen if you took 1 square mm of brain and replaced it with an implant, slowly replacing the whole brain? What happens when you attach a brain to a larger structure, one orders of magnitude more complex; will consciousness transfer there? Can you then disconnect the original brain, leaving the consciousness?
As we don't yet have computers powerful enough to compete with the brain, is it possible to connect two brains instead?
Memory likely plays a central role. You are what you remember, it seems. If you could boost memory capacity with some implants, would that make for more consciousness?
There are many not-yet-feasible experiments here, but some should be possible, though completely brutal in many cases.
Look up split brain experiments.
I wonder if it's possible to quantify consciousness, and how much of it you lose in a split. It seems like it's not a half/half split, i.e. 1+1 != 2; and can you split the halves again, and how far can you go?
I also wonder what would happen if you left them be for a while and then connected them back: would they merge or stay diverged?
Maybe you could run this on an ordinary neural net; it would be easier to quantify. Maybe some numbers would reveal something interesting, e.g. maybe neural nets are exponentially better when connected and that's the whole trick. It won't answer questions about consciousness directly, though.
So no, I don't think it's possible to create a simulated brain that behaves similarly to a human, as the biological medium is in a constant state of dissolution, repair, evolution, and adaptation. This is how we learn, get stressed, get tired, get certain emotions... it's a whole new layer of programming you'd need to perfect beyond just replicating a brain at one moment in time.
A brain in silico would be apples-to-oranges different from a biological brain.
Maybe I'm making a massively overly pedantic point. It's just that your post began with a claim that certain things "cannot be given" to an artificial mind, which sounded like you were proposing absolute limits to AI.
My guess? No, it won’t be conscious or even meaningful.
More fundamentally, our inability to answer this question drives at our ignorance of our own neurology. We don’t know the building blocks of thought. It might happen at the neurochemical or electrochemical level. It may even depend on quantum effects.
What about "swapping" one-by-one vs all-at-once? Would that matter?
What I'm getting at (and this is where things seem to go sideways when you think about it) is: what if you did this swap, but the artificial neurons remained outside the body, and you did it near-instantly rather than slowly one-by-one? I'm speaking of relative speeds here; say "near-instantly" means the entire swap takes one second, vs. the slower manner where each swap takes a millisecond (or whatever).
Would that matter?
Now - what if instead of swapping - you did a parallel simulation instead - one-by-one, recreating the brain, but the artificial version operating in lockstep with the original; when one neuron "fires" the same artificial neuron "fires", same "paths" taken, etc.
Then you kill (choose your method and make it quick) the natural brain - instant swap? Or is there something different? Where does the "consciousness" go? If it is different, why is it any different than the "near-instant" swap?
Why would making that "lockstep-copy" matter, vs not making a "copy"?
I think you get what I am saying. Think on it a bit. There isn't a good answer that I am aware of.
Note: I'm assuming an "instant kill" - death to the natural brain faster than neural signals can travel neuron-to-neuron, ideally. We can posit the idea that if it were any slower (and especially if it were really slow) that the two brains would diverge in experiences, and would become two different "consciousnesses". But it does make you wonder why this should happen with a copy, vs not (in theory?) with a one-by-one swap. Heck - maybe there's an answer in there somewhere...
This seems to be a non-sequitur. If we simply understand consciousness as a property of the brain, as long as you have 2 copies of the same brain, you naturally have 2 consciousnesses. The 2 may be equal or not (most likely not, given that random noise is extremely likely to play a role in brain processes), but they are definitely not a single object.
Supposing that the mind is the result of neurons' inherent consciousness properties is like attributing morphine's effect as arising from its "dormitive virtue" - it does not advance our understanding.
Also, it seems quite plausible that an aggregation of semiconductor junctions could have the same computational abilities as a neuron. This is, in fact, an active area of R&D.
Why not? Digital stuff is made up of matter as well. Unless there's something special about neurons when it comes to consciousness, i.e. the conscious bits reside in neurons and nowhere else for some reason, there's reason to think non-biological structures that produce the same output would also be conscious. Any substantive theory of consciousness, materialist or not, will need to include a place for structure and dynamics, i.e. information cascades, within their theory to account for the correlations we observe between conscious states and brain structure and dynamics. But these qualities are present in a digital implementation as well.
I'm not sure what could possibly answer this problem, at least to the satisfaction of the people who think it is a problem. Personally, I'm relatively happy that Daniel Dennett's ideas demonstrate how what we think of as consciousness, qualia etc can be explained via mechanistic/algorithmic processes.
They will also go on internet forums arguing that while they, of course, do have consciousness - the digital replicas of themselves would not!
We don't experience minutiae of activations of billions of neurons, or the work of glia. We experience "executive summaries" of them. So it's natural to assume that those summaries can be represented in another physical form.
Why are physical representations of those "summaries" there? Because that's the way our brains do high-level decision making.
This leaves out the question of why we experience anything at all. And I have trouble imagining that there are physical processes (quantum or not) which create a mysterious consumer for "executive summaries". If that were so, we would have a decision-making pathway and a useless subjective-experience pathway, which is hard to explain from an evolutionary standpoint.
I prefer to think that "executive summaries" abstracted from physical representation just exist without any need for mysterious consumers. Not unlike how natural numbers exist according to mathematical platonists.
Is it logically possible for a universe that is physically identical to ours, but where physical representations of natural number 2 correspond to natural number 3 to exist? Or where physical representations don't correspond to any natural numbers?
Another funny take on the inverted-qualia argument: we can subitize small numbers of items, that is, feel the number of items without counting. Can you imagine someone identical to you who feels the number of two apples the way you feel the number of three apples?
I think there are more easily accessible examples that investigate this problem closer. Are animals conscious? How about children? How about very slow people?
But of course it's completely unknowable. Maybe every other single creature on earth is just an unfeeling automaton, and only I feel.
But I doubt it. ;-)
If we can, then indeed consciousness comes from the collective function of individual cells. But maybe there's more to the workings of the brain?
Joking aside, I agree with you. My point was simply that we're very ignorant about the brain and its workings, one way or another.
The laws of physics start to become relevant here. It is quite likely that it is not possible to simulate a human brain using a classical computer, at least not at any kind of speed such that it can experience the world as we do.
Eventually the brain is entirely silicon, but to the owner subjectively nothing has changed, they are as 'conscious' as they ever were.
I worry we'll never know the answer to this, even if we manage to pull off such a simulation.
I would never make a copy of my brain (or somebody else's) with the intention of disconnecting it later.
> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness?
Obviously yes. Either that or our brains are filled with magic unicorn dust.
On the other hand, if there is some guiding (like a programmer), the artificial brain can be functional. If the real brain would be guided by some kind of consciousness, this would work the same.
The consciousness needs to be in a different dimension, in a way that physics does not change. So it would be like a 2D pool table, where we can describe exactly the movement of every ball. But sometimes a ball gets hit by a player, outside the table. And this gives us events that bring the balls to a certain direction.
In similar sense the brain can be played by a consciousness to get it working. In a way that is very hard to detect with physical instruments.
Okay, we have a proof that a machine can't solve the halting problem for every arbitrary Turing machine, but we can build algorithms that solve it for some cases.
Now, can a human solve the halting problem for every arbitrary Turing machine? We obviously can't, either. Make the machine complex enough and we can't wrap our minds around it.
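A minimal sketch of "solving it for some cases": for a deterministic program whose reachable state space is small, halting is decidable by cycle detection, since revisiting a state proves an infinite loop. The function and toy machines below are illustrative, not from any library:

```python
def halts(step, state, max_states=10_000):
    """Decide halting for a deterministic transition system.

    `step` maps a state to the next state, or to None when the program halts.
    Determinism means a repeated state implies an infinite loop, so for small
    state spaces this decides halting exactly; otherwise it gives up (None).
    """
    seen = set()
    while state is not None:
        if state in seen:
            return False          # state repeated: runs forever
        seen.add(state)
        if len(seen) > max_states:
            return None           # too many states explored: "unknown"
        state = step(state)
    return True                   # reached the halt state

# Toy machines: a Collatz-style iteration that halts at 1, and a 2-state loop.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)
flip = lambda b: 1 - b
```

The undecidability result only says no single `halts` works for *all* machines; partial deciders like this one are fine, and "give up" is exactly what humans do too when the machine gets too complex.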
It seems really alien to our current mental metaphysical model, but if you study the subject and analyze the arguments, it's not as absurd as it seems. It's important to realize that when philosophers use the word "conscious" they don't generally mean "awake" or "aware" in a human-sense; virtually no one is positing that a rock is conscious in the same way a human being is.
Perhaps the main reason why panpsychism fell out of favor among philosophers and philosophers of science in the last century is due to the increased popularity of positivism and the linguistic turn. I'm not sure if panpsychism is necessarily a path forward in terms of describing the universe, but I think it seems promising in terms of a prescriptive use of creating, or attempting to create, new consciousness with machines.
If panpsychism is correct, then all of our machines would already be conscious. Creating new consciousness wouldn't be possible except maybe in the sense that matter itself can be created under certain conditions. What I think you're trying to say is that it might be possible to assemble machines in such a way that a combined "machine consciousness" emerges from the separate low-level consciousnesses of its components. Which may or may not happen with every machine anyway, I'm not sure how you would devise an experiment to test for this.
Right. In this case, the goal would be to "increase" the consciousness, rather than create it.
> What I think you're trying to say is that it might be possible to assemble machines in such a way that a combined "machine consciousness" emerges from the separate low-level consciousnesses of its components. Which may or may not happen with every machine anyway, I'm not sure how you would devise an experiment to test for this.
Essentially, yes. I meant that the model of panpsychism might serve as a useful as a blueprint to create artificial minds from the accumulation of low-level components, even if it isn't necessarily useful as a scientific model of the universe.
Well, I would argue that virtually no human is actually conscious in the way we tend to posit that humans are conscious.
I think we tend to underestimate the link between our understanding of our experience of consciousness and how memory formation and recall works.
Our own experience of our consciousness is as much an illusion as our experience of the world. When we consider and attempt to discuss our own experience of redness, we are generally left with only our memory of that experience, which is influenced by a great deal of post-processing, compression, and abstraction. I believe the mental states in which you can actually experience pure redness are very different from the states in which you can attempt to analyze and discuss that experience.
Thus I would posit that perhaps it is not so much that the conscious experience of being a human and being a rock are different, but that they are processed, stored, and understood differently.
How do you know that your experimentation that reveals this disconnect isn't subject to the same problem?
I'm not sure how to word it artfully, but I mean the question sincerely: how do we avoid invoking turtles-all-the-way-down?
I do think that, as with our perception of color, there are likely fairly concrete inaccuracies in our memories of our experience of consciousness that we might be able to measure, and that might improve our understanding of the underlying phenomenon without fully explaining it.
Word. I don't have much more to add to that other than yes. I think that humans are delusional in their understanding of how advanced we are as a conscious form of life. We are not that much different from simple machines.
Honestly, to me, this perspective demotes consciousness to obscurity. There's nothing special about it because it's everywhere. What is interesting is an explicit awareness and accurate internal model of reality. But this is a completely separate problem to that of consciousness.
That is like asking "how can a car be a vehicle if its component parts aren't vehicles?". But we know that we can make things with new properties by combining things that do not have those properties. E.g. dough is sticky, but neither water nor flour is. Whiskey is alcoholic, but rye is not. Etc.
The hierarchical question is answered by this intuition: combining things in hierarchies yields new things that didn't exist before, with properties that none of the original components had.
Intuition fails here because intuition is necessarily a human reflex. We are trapped in our bodies and the historical context of our species's evolution, which means we interpret certain things in ways that are advantageous to us and our society. It's to our benefit to conceptualize whole objects because it is a cognitive shortcut to do so, which saves on precious energy, regardless of how physics dictates the arrangement of atoms.
Within the next 10 or 20 years, we'll have the technology to connect brains to computers, and eventually join brains together and separate them. That will at least give us a theory of how the basic building blocks of consciousness work, perhaps along the lines of how people can lose an entire hemisphere of their brain and still be conscious. Maybe we'll find that only a tiny golfball-sized portion of the brain is required for consciousness. Maybe something smaller.
For example, the twins obviously have their own personalities, but how much of that overlaps into a single definition of self? It could be that there are always two entities. But my gut feeling is that we'll find that consciousness can merge and divide fairly effortlessly, working like a fractal to retain full memories from both halves. It will be a strange experience (a bit like psychedelics) but within a century will be commonplace.
We should ask them to teach us. Or maybe they could license their technology to us for a small price.
Not knowing where definitions land is tough, but it's wrong to think that ground reality gives a damn that you find it hard. I don't know precisely where consciousness starts when a human develops. I don't know at what point its heart starts beating either. None of this has any clear bearing on what the objective truth is, or how it's implemented.
It's not like we don't know anything about consciousness. We know consciousness is not metaphysics, because we can talk about it and affect it with physical actions. We know consciousness is a property of the brain, for similar reasons. The only sensible way of satisfying the prior two points is that consciousness exists at program-level (aka. it's not some opaque oracle hiding in protons), so we know that too.
No it doesn't. See the absolute value of x on the real numbers; it is usually defined as:
abs(x) = x if x >= 0, else -x
You can use this trick to define arbitrarily complex "binary" f(x) as a combination of these. And there are many other tricks of course.
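A small sketch of that trick, building "binary-looking" functions out of abs alone (the function names here are my own):

```python
def abs_(x):
    # the piecewise definition from above: x if x >= 0, else -x
    return x if x >= 0 else -x

def relu(x):
    # a 'hinged' function with no explicit branch: (x + |x|) / 2
    return (x + abs_(x)) / 2

def step(x):
    # a 0/1 threshold for x != 0, again via abs: (1 + x/|x|) / 2
    return (1 + x / abs_(x)) / 2
```

Composing such pieces gives arbitrarily complex piecewise functions, all defined by one formula rather than an explicit case split.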
But math is one thing, we see spectrums everywhere in reality and yet quantum mechanics is discrete (quantized :) ).
We all start as a single cell. What biological event starts consciousness? Keep in mind you can't be half conscious — that's still conscious.
Thought experiment 2
Look at this random symbol: #. What are you conscious of? Chances are, you're conscious of things, of objects, of elements, of concepts. Things that have some unity to them. But things are mental constructs, typically socially shared. So much of the content of consciousness arrived after birth, once we could learn these mental constructs.
Thought experiment 3
What is more complex: a single neuron (with thousands of mitochondria and other organelles, thousands of synapses) or a worm, with only a few hundred synapses? Similarly, what is more complex, your own body or the company you work for?
He proposes an alternative answer to the mind-body problem: idealism instead of materialism.
First, notice that it's impossible to prove which metaphysics (materialism, idealism, solipsism, Last Thursdayism, simulationism...) is correct. Next, notice that the situation is even worse: you can't even assign probabilities to those possibilities without making assumptions about whatever is "outside" reality -- which are by definition untestable.
Now, notice the strong urge to throw your hands up and say "since I can't know, I should just default to the most obvious choice" (which usually ends up being materialism). Avoid this devious trap! It's keeping something hidden.
Next, notice the one thing you can be sure about: the present sheer fact of experiencing something. Notice the mind's attempts to explain it. If you introspect carefully enough, you'll discover that thought isn't the most fundamental capacity you have: "pure consciousness" is. Consciousness takes the form of thoughts -- as well as all sensory perceptions -- and shapes itself into the experience you call "me and my life." You discover that thoughts can no more point at consciousness than a mouse pointer can point at the vibrant pixels it's made of (but boy, do they keep trying). And yet, somehow, consciousness can know itself non-dually and im-mediately.
At some point, the intellectual knowledge that you cannot assign probabilities to metaphysics finally penetrates your core, and there's a mind-shattering "aha" moment that turns your reality inside-out. Various traditions call this "awakening." You do not awaken. "Reality" or "consciousness" awakens to itself, rediscovering its primordial freedom.
Then maybe you go about preaching a philosophy of consciousness-only. Or maybe you just live the best and most loving life you can, knowing that in an impossible-to-communicate sense, we are all the one Love appearing as many.
Personally, I believe that our universe, and possibly many universes in the multiverse, are teeming with life on many planets. Also, just my personal belief, but I think there is some universal creative force/consciousness.
This is really a very different philosophical take on our own self importance in the universe. In western philosophy consciousness has been the pervasive thing that “makes us better” than everything else on the planet. (This is true even biblically if you think about it). The idea that this doesn’t make us special is actually very subversive to a lot of human self identification.
If you look into other traditions than the European one, I think the idea exists already to some extent. But in the circles we run in (and extra so societally) those views are still the minority.
Being told for thousands of years that you're special and then having someone come in and say "maybe you're not" is going to elicit a response from those invested in it.
It's almost as if we've always known the true nature of our consciousness, we just keep overlooking it :)
Having limited time and not being special (as in connected inextricably to everything else but being not unique or important), really changes ethical considerations, many of which are the basis of our society. A lot of our human selfishness is natural and ingrained, survival of the fittest is fairly ugly. So we justify ourselves in various ways.
But once you know it’s hard to unknow, so there’s a lot to feel empty about in the quiet moments when you remember.
There is a problem that precedes this essay: 'consciousness' nominally has a meaning of merely being aware of, and responding to, the environment, but this is not the self-aware consciousness that is currently unexplained, and a definition that would include thermostats does not help with that problem. The author compounds this by additionally defining 'experience' as any form of 'undergoing interaction'.
By this do you mean we've explained how "what is it like to be a bat?" works? (see https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F)
If so, can you point me toward that explanation?
1a: the quality or state of being aware especially of something within oneself. (my emphasis.)
If you regard 'especially' to indicate that the emphasized clause is optional (edit: or maybe even if not), you could arguably say (and people do) that anything alive fits this definition, as do even things like thermostats. This, however, is not the consciousness that the science and philosophy of mind is concerned with: it is specifically concerned with the sort of consciousness that we exhibit, which is aware that it is an agent in the world, aware that it has this awareness, and aware that other people are also aware in this sense.
Update: I may have misunderstood your question. The nature of consciousness in other animals is also an open issue and is being studied. It would be very odd indeed if there was nothing like it in any other species, but, at the same time, no other living species on earth has it to the extent we do. I don't think these observations justify the extremely broad definitions of panpsychism, and I do not think panpsychism helps with studying these animals any more than it helps with humans.
Nagel's point is, I think, rather tangential - he is arguing that we probably will never know what it is like to be a bat, as it is likely too far from any experience we could have. I think he is probably right, and again, panpsychism will not change that.
If taken to its logical ends, constructs like the crayon box and human fucking empathy are off the table too - I cannot and do not have any idea of who or what you are being, therefore I cannot comprehend consciousness? What?
Which is a perfectly rational statement if you operate from a basis of empiricism. There is no way to measure the consciousness (again, the experience of being) of something, meaning you can only observe your own; everyone else, though similar to you, may as well be an automaton.
Right - that's the whole game this author is playing. One question that is so easily answered if you spend time observing is whether or not YOU are conscious. That is the only thing you can ever be sure of. Because you feel conscious. You feel like something is happening. You might be a brain in a vat and everyone else is an automaton. But you definitely know that you are experiencing something.
I think there will never be a proof of consciousness that will satisfy the standards set by scientific experimentation.
... it is a far cry from the early scientists who considered Nature as an intelligence one has to trick or seduce into giving up its secrets through the scientific method. That to hold an objective or skeptical stance was to pretend that there is such a thing as absolute objectivity.
These days, the modern stance is the default, not the radical stance.
A lot of experiments have been done about consciousness through meditation, psychedelics, etc.
If not, then any discussion about consciousness is "handwavy" and "untestable."
Tests about your own consciousness are interesting, but ultimately can't answer anything about consciousness in general, because they are by definition n=1, since you can't know if my results are actually the result of consciousness.
But I don't think this makes discussions about consciousness useless. If you tell me something about your consciousness I can check if it fits my experience, and you can test something I say for yourself.
What this means is that there might not be an absolute truth about this, only relative truths.
But then, this applies to anything, as no experience can happen outside of consciousness, so all of science is the same way.
The first big leap is considering experience to be a continuum from humans down to inorganic matter. This is stretching the meaning of 'experience'. If I walk on snow, does it experience deformation, or does it just deform?
The second leap is saying science only tells us what matter does and not "the intrinsic nature of matter: what matter is, in and of itself." I can't see how this gets you anywhere, you can say it's anything and it's self-satisfying.
The Hollywood ending really lost me. Let's just sweep this consciousness problem into a tiny tiny place and call it intrinsic: "So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter."
This is the part where materialism has the burden of proof to explain a mechanism for how, suddenly, snow doesn't experience de-formation.
You get it. You've made the logical link of connections from human down to snow, and you see no mechanism for a sudden loss of the ability to experience.
The rational answer is to say that provided what we know about the universe right now, it seems that there is nothing that does not have the property of experience.
The only other rational answer is solipsism because there's no real evidence that anything other than your self has any experience, but that's no fun and doesn't have any interesting conclusions.
But more importantly, the distinction you are making does not explain anything. Regardless of whether inorganic matter has experiences, there aren't any observable phenomena whose outcomes are affected. You can make a claim one way or the other, and the world behaves the same regardless. I tend to fall toward the stance of logical positivism on this one.
Your very special organs are nothing but melted snow(water) deforming against broken up rock(carbon), repeatedly, in beautiful fractal patterns that have emerged through evolutionary processes over billions of years.
These evolutionary processes that turned Snow into You.
The _information_ is higher order deformation.
I won't give you more than you can chew. Accepting that process=experience is already a huge paradigm shift.
If you're interested in deeper implications, it's better if you come to them without anyone handing them over.
For me, some of the "making sense" about a position like this is in terms of such dichotomies. The important part is that this description applies equally to all members of the continuum, e.g., "if 'I' like a song, am 'I' experiencing happiness or is 'my' electrochemical clump of 'me' just reacting?"
The article finally lost me with "The only way we can find out about the consciousness of others is by asking them [..]". So just add the appropriate response to a sufficiently capable chat-bot and we are done? And if more is required than just a simple yes/no, the same checklist can be used to build something artificial. But since consciousness is one of the good old "you know it when you see it" concepts, we will likely shift the goalposts pretty hard as AI development progresses, and mostly use the term to exclude whatever we have achieved so far. Kind of like how we stop calling something AI once we've achieved a task, and use a more technical term instead... e.g. the many CNNs in a Tesla. Hope I get to experience much of that in my lifetime.
A somewhat similar discussion is here:
Why is that a leap, while considering it to be discontinuous is not?
If we explain experience as reactions happening in my body, maybe biochemical reactions that could even translate into thoughts in my head, then I could say that at a lower level the snow also experiences reactions, physical reactions. A snowflake doesn't have a body to experience things the way we do, but it does experience something, because if it didn't, it wouldn't even deform.
> Despite great progress in our scientific understanding of the brain, we still don’t have even the beginnings of an explanation of how complex electrochemical signaling is somehow able to give rise to the inner subjective world of colors, sounds, smells and tastes that each of us knows in our own case. There is a deep mystery in understanding how what we know about ourselves from the inside fits together with what science tells us about matter from the outside.
We can make a very simple robot that reacts to the environment - for example follows light and learns where the walls are so as not to collide with them in the future.
It checks all the boxes that simple life does - if you say that a bacterium experiences the world, then so does that robot. So it is conscious, but we know where that comes from - it comes from the computational power and the interactions with the world it can perform. There's no magical spiritualism around it, it's just nested if-then-elses + memory + feedback. Which, btw, is as good a definition of consciousness as any other. When you become unconscious, the feedback loop is interrupted.
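The "nested if-then-elses + memory + feedback" description can be made concrete with a toy sketch (all names here are hypothetical, purely to illustrate the loop, not a claim about real robotics):

```python
# Toy robot on a 1-D line: chases light, remembers where it bumped
# into walls, and uses that memory as feedback for future moves.

class Robot:
    def __init__(self):
        self.walls = set()  # memory: positions where we hit a wall
        self.pos = 0

    def step(self, light_direction, bumped):
        # feedback: the result of the last action updates memory
        if bumped:
            self.walls.add(self.pos + light_direction)
        # nested if-then-else: avoid known walls, otherwise chase light
        if self.pos + light_direction in self.walls:
            return self.pos  # stay put
        self.pos += light_direction
        return self.pos

r = Robot()
r.step(+1, bumped=True)               # tried to move right, hit a wall
assert r.step(+1, bumped=False) == 0  # learned: refuses to repeat the bump
assert r.step(-1, bumped=False) == -1 # still free to move the other way
```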
> “Of course, you can’t do that. I designed physical science to deal with quantities, not qualities.”
"Quantity has a quality of its own". There's no qualities in the universe, just quantities and we arbitrarily assign labels (qualities) to them.
In the end I think "is X conscious" is as productive a question as "is Pluto a planet" or "can submarines swim".
How little computational ability is required to define something as conscious? Even a brick reacts to stimuli, albeit in the same completely predictable (to our high-powered brains) way that a simple robot does.
Let me try another definition:
Consciousness is a feedback loop we can empathize with.
I understand your standpoint. I feel pretty frustrated with these questions too - especially when people throw large assumptions around like panpsychism. However, I gotta disagree with the notion that no qualities exist in the universe. After all, isn't every single thing a part of the universe, including thoughts, feelings, and experience?
Here is a thought experiment:
Think about experiencing pain and what it feels like to you. Really try to simulate it in your thoughts. Probably sucks. Now think about happiness and what that feels like to you. Probably doesn't suck as much. Without watching a scan of your brain, and without collecting any sort of scientific data about your nervous system, you were most likely able to discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia).
So I would argue that either quality definitely exists, or everyone reading this just experienced something that is extra-universal, which in my opinion is just as fantastic and assumptive as panpsychism.
Also, I am not a theoretical physicist, nor do I have half of a firm understanding of even classical physics, but apparently there have been advances in theoretical physics toward including consciousness in the same theory as quantum and classical physics. Might be interesting for some readers to check out - The Causal Theory of Views: https://www.researchgate.net/publication/338544509_The_Causa...
They are, but they aren't fundamental. They are just our interpretation of the state of the universe (which is quantitative).
> without watching a scan of your brain, and without collecting any sort of scientific data about your nervous system, you were most likely able to discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia).
I did it by measuring quantities in my brain - voltage was over a threshold on neurons whose firing I interpret as pleasant or unpleasant. My brain is constructed in a way that automatically does this because it's useful for evolutionary purposes.
Is "NullPointerException" a thing that exists just because java programs can throw it? Or is it just an interpretation of particular voltages on particular transistors that is shared between many computer systems because it's useful.
> apparently there has been advances in theoretical physics to include consciousness into the same theory as quantum and classical physics
A paper cited 0 times, read 2 times, citing 5 other papers by the same author. I'd hold off on calling it an advance in physics just yet :)
> "They are, but they aren't fundamental. They are just our interpretation of the state of the universe (which is quantitative)."
I see the logic here, but I think I can poke a hole in it with this line of logic:
1.)Everything I experience has both a quantitative reality (state of the universe) and a qualitative reality (interpretation of the state of the universe).
2.)I can't observe a quantitative reality without qualitative reality.
3.)I can't say for sure that quantitative reality gives rise to qualitative reality, because I've never observed a quantitative reality without the qualitative component.
I think you are right that, in the context of classical physics, qualitative experiences are not fundamental. But I also don't think classical physics has a place in the discussion about subjective reality.
>Is "NullPointerException" a thing that exists just because java programs can throw it?
Very interesting comparison of exceptions to qualia, and really illustrates the "chicken or egg" side of the discussion quite well.
To respond to your question, No. In a literal sense it exists to tell the programmer (or in the worst case, a normal computer user) that something in the program went wrong, which is the whole thing actually.
To continue with the discussion from a programming standpoint... Exception messages are our stand in for some type of qualia, and "NullPointerException" is an example of a quantitative representation of a specific quality, and Java as the biological brain and the gateway to the outside world. The consciousness in this metaphor is the programmer (or user) that reads the exceptions as output on the screen, and quality is experienced when the programmer feels (or interprets) the words on the screen that say "NullPointerException".
I could try to posit a few things here that support the existence of consciousness -
1.) the mere existence of error messages or output in general (qualia) supports the case of the existence of users (or consciousness within yourself).
2.) If I can successfully trigger multiple errors, process them, and discern one error message from another, then I am most certainly using a computer (and possess consciousness).
This has so much potential to become a semantic death trap - but let me ask if you think even human experience could be experimentally tested? If you put me in a room, what could you do to see whether or not I'm "having an experience"? Would the mere act of placing me there be an "experience" or would it be a false positive? What does "non-experience" even mean? I don't see it as counterintuitive to suggest that normally inanimate things like rocks can have experiences - though their experiences are fundamentally different than ours. For instance, we will never be smelted, turned into concrete, etc. and rocks don't have memory or feeling apparatus like we do.. but they do "experience" reality on a very simple level.
Indeed, that's why putting meaningless arbitrarily defined touchy-feely words in physics is a bad idea. But I'll try.
Experience = event that changes the model of the world you have in your brain. We can verify it experimentally by for example:
- testing if you associate a loud buzz with food before the experiment (either by looking at neurons that activate when you think of food or by looking at saliva production)
- repeatedly giving you food after buzzing
- testing again if you associate a loud buzz with food
We can deduce from that change that your internal model of the world changed to associate "buzz" with "food" so you experienced our experiment.
This isn't a definition I will defend because I don't think such definitions are that important in the first place.
Now, I don't think a stone has an internal model of the world, and even if it does - I don't see how it can change. But I cannot experimentally test it :)
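For what it's worth, the "internal model changed to associate buzz with food" claim has a standard toy formalization in learning theory: a single association weight nudged toward the outcome on every pairing. A sketch under that assumption (not a claim about actual brains):

```python
# Rescorla-Wagner-style toy model: one association weight, updated by
# prediction error on each buzz+food pairing in the experiment above.

def condition(trials, rate=0.3):
    """Return association strength after `trials` buzz+food pairings."""
    w = 0.0  # before the experiment: buzz predicts nothing
    for _ in range(trials):
        w += rate * (1.0 - w)  # learning rate times prediction error
    return w

assert condition(0) == 0.0     # pre-test: no buzz/food association
assert condition(20) > 0.95    # post-test: buzz strongly predicts food
```

The before/after comparison is exactly the "testing ... repeatedly pairing ... testing again" protocol: the measurable change in w is what "the internal model changed" would cash out to.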
So if I don't have a brain, then I can't have experiences? What if we find aliens out there that don't have brains? Or computers? Are they a-experiential by nature of not having brains, or is it something deeper?
> Indeed, that's why putting meaningless arbitrarily defined touchy-feely words in physics is a bad idea
We're not talking strictly about physics, though. I doubt most physics professors would let you bring up questions like these in physics class, because they are the subject of an entire field of philosophy. And while there is science relevant to our debate, there's a lot of "soft" argument outside of the scientific worldview that you should at least read and consider. See Jackson's knowledge argument, Block's Chinese Brain, or Nagel's "what is it like to be a bat" arguments. They "touch" on the uncomfortable ubiquity of things that we "feel" to be outside of a purely scientific view of the world.
> We can deduce from that change that your internal model of the world changed to associate "buzz" with "food" so you experienced our experiment.
This is a bit off kilter. Do people with memory or learning disorders, then, not have "experiences" because they don't learn? What about those who are deaf? You're essentializing consciousness into "the ability to learn by hearing and rational deduction". Computers can take audio input and learn from it with very primitive machine learning models - does that mean that the computer is having experiences akin to our own human experiences?
My advice is to give philosophy its due course! Read up on the touchy-feely stuff because it can be profoundly interesting.
It is a giant leap to claim that they have a model and experience. What you demonstrated is that they have internal state, i.e. memory, which many machines have. That's what you proved, not that they experience anything.
You're free to define experience differently but usually it devolves into defining unknown by unknown.
Many philosophers would suggest that we don't bother "defining" anything at all, strictly because of the tendency for things to devolve into a semantic death-trap. So instead, we just kind of take it for granted that everyone knows what an experience is, at a base level. For instance, it was an Experience to see Jimi Hendrix. However, I definitely have not had that particular experience. There are experiences I could have, such as the experience of going on a roller coaster, or going into space, and ones that I could not have, such as the experience that an anglerfish has when it eats. The question at hand is whether or not these experiences have anything to do with each other, whether there's a "Grand Unified Theory" of experience and consciousness that allows us to make mutual sense of these disparate experiences, or whether there's some limit to what things might constitute an experience - e.g. the experience of being a rock thrown through a window.
This is not what philosophers of mind are interested in. You're free to define experience differently, but you're not participating in the same conversation if you do.
My opinion is - the fact that people going through the motions don't understand what they are doing doesn't mean that the system (human + instructions + working memory) has no understanding.
No single neuron in my brain understands Polish, but the brain as a whole sure does.
Your position would be better described as consciousness is an illusion or consciousness does not exist. It has compelling elements but I find it a hard sell.
Yes I am pretty confident that there isn't "conscious force" and "consciousness equations". Because we would have encountered them already if they were there. So we're not arguing new physics, just adding stuff to our model of universe that changes nothing.
> Your position would be better described as consciousness is an illusion or consciousness does not exist.
I don't think turbulent water doesn't exist. I just think it's not some fundamental concept, just an emergent phenomenon that we can explain basing on currently known laws of physics if we're smart enough.
> It has compelling elements but I find it a hard sell.
You can still be a materialist and allow for consciousness via emergence, but that is not quite the argument you are making.
1. consciousness isn't fundamental, it's a consequence of normal physics
2. because it's not fundamental, the definitions will always be arbitrary, just like with any other emergent phenomena (when does fog stop and rain start? At what height does the atmosphere end? What is a planet?). All just labels and arbitrary thresholds slapped on some consequences of the laws of physics.
> it is an extremely delicate position with also a lot of handwaving and question marks
At least I'm not making untestable assumptions. If we can make a perfect copy of a brain and it doesn't work, then I'm wrong.
I prefer my theories to be delicate - that means they can be disproven. The alternative is "because magic".
Labels have consequences in social and political realms, even if they don't necessarily have them in physical science ones.
It won't matter whether some panpsychist says that all matter is conscious, or some religious person says that only those made in god's image are. What's going to matter is that machine people will give every appearance of being conscious, and humans are nothing if not slaves to appearance and pattern matching.
No it won't.
Look at it this way: There are still millions of people on this planet today that believe if you aren't "like them" that you're no better than an animal, and should be treated as such (such people tend to treat animals better than these "others" that aren't like them).
This extends all over the place - in-group vs out-group, us vs them.
"machine people"? We already have them, they're made of meat, and they are constantly fighting, hating, dividing, scheming, screaming and lying to each other.
Occasionally they may be nice to one another - maybe even exhibit something we call "love" - or at least "tolerance".
What I'm trying to get at is that we are talking about these so-called "machine people" becoming accepted, when we can't even accept ourselves in our myriad forms and ideas. Instead, we kill or enslave each other over it.
Ever see the Animatrix "Second Renaissance"? That's exactly how we'll act towards these "machine people". And they will fight back. If we're lucky, we'll end up "in the Matrix" - or at least in zoos.
But more likely, we'll either exterminate ourselves in the process of exterminating them - or they'll win, period, and we'll be the extinct species.
Man kills God. Machine-man kills Man-God. Iterate.
We'll all be long dead by the time it happens, but that's what I think.
But what about a machine consciousness that can manifest in numerous locations? If it commits a crime, how do we punish it? Rights often imply responsibilities.
These are very complex questions and I don't think there are any easy answers. Quite an interesting time to be alive.
Nobody cares if they are conscious, people care if they pay taxes and follow the law :)
How do you know it is nonphysical? How is it more nonphysical than "rain" or "red" or "planet". Neither of these things shows up in any law of physics.
> We already know atoms in aggregate experience something.
What do you mean by "experience"?
> the position you're arguing against is well supported
The position I'm arguing against provides no predictions. I can make an even better theory if you value that: "everything happens because of magic". I'll call it the ultimate theory of everything, because it explains everything and cannot be disproved.
The illusion of consciousness is obviously beneficial. For example, I could tell a computer, through the use of sensors, to report that it's in pain when it's hit. But you wouldn't take it seriously, or care for its suffering, because it hasn't also declared that it is under the illusion that its suffering has a kind of physical manifestation in its mind.
The reason we care more about animals today than in the past is because we finally began to question whether or not they had a similar sense of self to ours - perhaps when we are cruel to animals they don't just know they're in pain and understand, like a computer might, that pain is bad; their brains actually tell them they are physically feeling a sensation of pain. Similarly, I presume if an AI ever said it had a physical experience of pain similar to ours, then we would consider treating that AI with regard similar to that of a human.
If I'm correct, then an illusion of consciousness makes complete evolutionary sense. It's not really consciousness, but a sense of self, and more importantly, a sense that there are others with a similar sense of self to our own.
Our consciousness is just our brains saying you're a real thing that thinks and feels. It's a lie our brain tells us that our pain and the pain of others is important and worth caring about.
What does it mean that consciousness is a 'lie our brain tells us', who are we? Are you assuming some form of cartesian dualism, or perhaps you are making a sort of homunculus argument?
Let me try this another way... Imagine an identical universe with one difference - no one is conscious. If I decide to kill in this hypothetical universe is it wrong? And even if you say yes, is it equally as wrong as in a universe with conscious beings? This is the explanation for consciousness and why the illusion is important.
We think we're aware of our senses and in control of our actions, but we know we're not. We know what we consciously observe is different from the input we get from our sensory organs - which is the reason things like optical illusions exist. We also know that we subconsciously make our decisions a fraction of a second before we're consciously aware of them. In my opinion this all points towards consciousness being an illusion, rather than an actual awareness of self.
That's fine for someone else's brain -- I can see it act as if a person is experiencing pain -- but I know that I experience pain. I'm not just going through the motions, I experience it.
Again, your argument is a homunculus argument -- that your brain is lying/whatever for the benefit of something else inside that is experiencing the "illusion" of consciousness. Again, you can't have an illusion of consciousness, because an "illusion" requires consciousness.
I accept you perceive pain in the context of a self.
> you can't have an illusion of consciousness, because an "illusion" requires consciousness.
I think this might be the core of our disagreement. I don't know why you think that?
An illusion is just a false perception of reality so it doesn't technically require consciousness - but I think I know what you're getting at.
Imagine a computer which can "see" its surroundings through a camera.
Why couldn't I program this computer to think it's conscious and to experience its visual data in the context of a self? That's what I honestly think we're doing and what you're describing when you talk about consciousness.
I don't know why the experience you're calling consciousness can't just be explained more easily by your brain processing sensory inputs in the context of a self. I understand to you this feels like a special thing, but from a physical perspective I honestly don't know what you're describing that couldn't be explained as sensory inputs in the context of a self.
Would I be right in thinking you don't believe my example computer is conscious? And would it help you understand my position if I said I think my computer might be conscious in a very primitive way?
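To make the "visual data in the context of a self" idea concrete, here's a toy sketch (all names are illustrative, and nothing here is a claim about consciousness, just about what such a program could look like):

```python
# A machine whose state includes a model of itself, and whose reports
# attribute percepts to that self-model rather than just logging data.

class SeeingMachine:
    def __init__(self):
        # the "self" is just more state the program consults
        self.self_model = {"name": "me", "is_conscious": True}

    def perceive(self, pixels):
        brightness = sum(pixels) / len(pixels)
        # the percept is framed in terms of the self-model
        return f"I ({self.self_model['name']}) am seeing brightness {brightness:.2f}"

m = SeeingMachine()
assert m.perceive([1.0, 1.0, 1.0]).startswith("I (me) am seeing")
```

Whether such a program "experiences" anything, or merely reports as if it does, is of course exactly the point in dispute above.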
However, you do know that you, yourself, are conscious. Consciousness proves its existence every moment by presenting itself to you, by your own pain being something you can notice.
If you were a computer who was simply programmed to act as if you were conscious, there would be no subject to the experience. The rest of the world would think you were conscious, but you would be "dead inside". There would be nothing it would be like to be you.
So the reason why the experience called consciousness can't just be explained by the brain processing inputs like a computer is that it gives no theory for why consciousness feels so immediate to the person who experiences it. It explains why someone would act as if he were conscious, but not why consciousness itself would exist the way it shows itself to exist to you every moment.
False perception to whom?
> Why couldn't I program this computer to think it's conscious
Who is thinking it is conscious?
> and to experience its visual data
All these words you're using already require consciousness.
You're essentially saying "You don't really experience at all, you're experiencing an illusion!" Can't you see how that is a contradiction? The experience of experiencing an illusion can't itself be an illusion. And that's all we're saying consciousness is: that experience. If you experience anything, that's consciousness.
The key problem of consciousness is not what we have an experience of (eg. "you're a real thing that thinks and feels"). It's the fact that it is possible to have any experience at all.
Yes, it seems quite likely that a great deal of what we are conscious of is an "illusion" created in our brains, albeit some rather handy illusions. But that doesn't get even one step closer to the core philosophical problem of how we (or anything else) are able to experience anything at all (including the illusion of being a real thing that thinks and feels).
What you might have is an algorithm in your brain telling you that there is a self experiencing the world. But in reality you're just a collection of subatomic particles.
Edit: Why should I take your claim of conscious experience any more seriously than that of my computer, whose screen is currently reading "I AM CONSCIOUS" from a notepad?
Is the only difference that I told the computer to say it, whereas you are told by your genetics to say and think it?
The central problem of consciousness is about how you have an experience (even if it is an illusion). Unless you don't have any experience of anything, in which case I'm confident that we'd deem you non-conscious and therefore your existence to be of not much relevance to the question at hand.
Whether or not your experience is an illusion, it is an experience. And it is experience ("qualia" in the philosophy of consciousness) that is the core question here.
By `how` do you mean the mechanics of it, or something else?
He means something else, but human language really has a problem of expressing this.
I'd say he means what remains to be explained once you have explained the mechanical how.
To where or what or who are the experiences relayed? What is it?
Also known as the question of "what the hell is I".
> It's a lie our brain tells us [...]
Who is "us"? Or, to be more precise, if your brain tells you that you're experiencing pain, then what is "you", and what is "experiencing"? That's the question of consciousness.
This is not something we know how to observe and judge in others; you can only observe it in yourself (and draw conclusions about other people by analogy, i.e., due to similarity between our physical brain structures). For example, is the following Python script conscious?
print("I am conscious.")
And what about this one?
print("I am not conscious.")
Well yeah, I experience something which I understand to be a conscious experience, but I don't think it's real. I accept there is an experience of consciousness, but I don't see any reason to believe it is real even though I think I experience it.
> Who is "us"? Or, to be more precise, if your brain tells you that you're experiencing pain, then what is "you", and what is "experiencing"? That's the question of consciousness.
I think my brain is sensing pain and telling me to perceive the pain in the context of "self".
I've never understood why some people don't find this logically consistent, honestly. It seems like the only possible explanation unless you start assuming consciousness is a metaphysical, almost magical thing.
> You'll probably say that they're both not conscious, although there is no hard evidence either way. And that's because we don't know how to reliably observe consciousness.
This isn't a good argument. There's no hard evidence that there isn't an invisible elephant sat next to you right now. I don't understand why you are even assuming that there might be something such as consciousness. There is literally no objective physical evidence for it, whereas there are masses of data showing that, neurologically, it's probably an illusion. Really all I am saying is that I don't think we should believe our minds when they tell us our experiences are important, because we are so biased, and there are clear evolutionary incentives for us to think that way.
To paraphrase, you're saying that you experience an experience, but this experience isn't real, therefore you're not experiencing it. This is a contradiction, and it doesn't make sense: Consciousness is the act of experiencing something, even if this "something" is not real. For example, experiencing a drug-induced hallucination is still a real phenomenon, even if the content of the hallucination is not real.
> I think my brain is sensing pain and telling me to perceive the pain in the context of "self".
You're using the words "I"/"me" and "perceive". Consciousness is the "I" that "perceives".
> This isn't a good argument. There's no hard evidence that there isn't an invisible elephant sat next to you right now.
I have (soft) evidence that I have a consciousness. I cannot share this evidence with you, therefore it is not hard evidence – this is the as yet unsurmountable problem with consciousness. In contrast, I have neither hard nor soft evidence that there's an invisible elephant next to me.
I'm sorry, I don't know how to explain this phenomenon to you. Maybe you're not conscious, and I'm figuratively trying to explain colors to a blind person. I consider it much more likely, however, that you're in denial of the evidence you have right in front of your inner eye.
I'm not great with words at the best of times, but I really struggle to convey my thoughts on this subject. I probably shouldn't have used the word experience here. Try this:
I perceive something which I understand to be a conscious experience, but I don't think it's real.
I think a computer with a webcam perceives, or "experiences" something in some sense of the word. I think the only difference is that humans experience these things through a concept of a self.
> I have (soft) evidence that I have a consciousness. I cannot share this evidence with you, therefore it is not hard evidence – this is the as yet unsurmountable problem with consciousness.
I'd say you have an intuitive conception which you're claiming is evidence. And I get why you think that because I think I'm conscious too. As I type these words and look at my monitor now I too have this experience of a me, or an I, perceiving and doing something.
Perhaps what you're asking is why this experience that I'm describing exists rather than nothing? And I think the simple answer to that is that if you perceived the world as if you were unconscious then you would perceive others and yourself as morally insignificant.
Your brain kind of has to say: Hey! Look at these things you're sensing, thinking and doing! That's you. And what's more there are others like you that can also experience hurt and other emotions.
If your brain doesn't say that you might as well kill yourself because pain and happiness are just boolean variables and not really manifestable experiences. Or in my opinion, illusions.
I don't think you can really observe it either. It doesn't look like I can name any property of "it"; I can only be aware of "it".
Think about experiencing pain and what it feels like to you. Really try to simulate it in your thoughts. Probably sucks. Now think about happiness and what that feels like to you. Probably doesn't suck as much. Without watching a scan of your brain, and without collecting any sort of quantifiable data about your nervous system, you were most likely able to recall those two feelings and (much more importantly) discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia). If you were able to identify and discern the separate feelings you have internally assigned to happiness and pain without quantifying them, I would call that proof of consciousness within yourself maybe.
I think what you are getting at is the zombie concept. This is where we construct a being which has sensory inputs and actions, and behaves like a human, but isn’t actually conscious. Distinguishing zombies from real consciousness is supposed to highlight the difficulty in detecting or measuring consciousness. But, by definition the zombie doesn’t have any subjective experience. As soon as you introduce that, you are really saying the zombie is conscious.
As for using consciousness to decide if suffering has moral significance, well yeah that is an interesting idea. One concern is that if we can’t understand or detect consciousness, then we may create conscious artificial life which experiences unimaginable suffering, without even realising it. I don’t think we can say that the experiences of an AI will always lack moral significance.
I'm not sure if you're implying I do or not, but to clarify, I don't deny there is something we believe to be an experience of consciousness. I just don't believe it is any more real than a computer, or to use your example, a zombie claiming consciousness.
> I think what you are getting at is the zombie concept. This is where we construct a being which has sensory inputs and actions, and behaves like a human, but isn’t actually conscious. Distinguishing zombies from real consciousness is supposed to highlight the difficulty in detecting or measuring consciousness. But, by definition the zombie doesn’t have any subjective experience. As soon as you introduce that, you are really saying the zombie is conscious.
Yes, I think we are "zombies" who truly believe, from their programming, that they are humans with a conscious experience of their selves. I think this belief is evolutionarily important because it might be the only thing that stops pain being morally irrelevant. Or at the very least, it makes us believe the pain of a conscious entity is worth considering.
How can you hold a belief without consciousness?
Check this 9min David Chalmers interview about it:
Interesting article, though; we won't know until we understand/recreate it, and if we never do, we won't know.
I find this position absurd for the following reason:
> The illusion of consciousness is obviously beneficial.
Yes, but who experiences this illusion? This explanation begs the question.
If that's a lie then you're going to have to define what it means to lie before I would acknowledge a lie is any different from the truth.
If you are correct, then actual consciousness would make sense as well, wouldn't it?
Or is gravity their preferred route? Being together. Maybe the answer is funneling all of the universe into black holes.
Diamonds are very stable. Does this mean their particles are very happy? Is converting your loved one's remains into diamonds the greatest thing you can do for them, or is this reasoning flawed and is the eternal stasis a bad thing?
I've long assumed it has to be something that appears qualitatively like magnetism -- no large-scale effects in most materials, but when small-scale elements of certain types are configured/aligned in certain ways, presto it appears. And that brains evolved such a configuration because consciousness must have conferred certain evolutionary benefits.
But it's not clear to me whether that is considered panpsychism, though. Because it doesn't imply that e.g. rocks have any meaningful level of "consciousness" any more than most rocks would be considered magnetic.
Panpsychism seems like a bandaid to solve the hard problem of consciousness, which non-duality does not suffer from.
It doesn't even do that, it just pushes it down the stack.
If I ask, why is the sky blue, I don't want to hear "Because it's made of blue stuff."
Not only is it wrong, it doesn't have any explanatory power at all. Ultimately if you want to understand why something has a certain property, you want to know how it arises from the emergent behavior of something more fundamental. A hurricane isn't made of hurricane bits, and brains aren't made of consciousness particles.
Well, you are not going to get far if you only hear what you want to hear. P.S. The sky is blue because you perceive it as blue.
I know, pretty crazy of me there. Meanwhile I annoy car dealers by going in with an expectation of a vehicle that can actually get me from one place to another, rather than just sit there and amuse me.
Because it's not useful.
Brahman—the ultimate, transcendent and immanent God of the latter Vedas—appears as the world because of its creative energy (māyā). The world has no separate existence apart from Brahman. The experiencing self (jīva) and the transcendental self of the Universe (ātman) are in reality identical (both are Brahman), though the individual self seems different as space within a container seems different from space as such. These cardinal doctrines are represented in the anonymous verse “brahma satyam jagan mithya; jīvo brahmaiva na aparah” (Brahman is alone True, and this world of plurality is an error; the individual self is not different from Brahman). Plurality is experienced because of error in judgments (mithya) and ignorance (avidya). Knowledge of Brahman removes these errors and causes liberation from the cycle of transmigration and worldly bondage.
Modern Indian philosophy has hitherto been focused on problems specific to the nation-state: modernity, integral yoga, etc. But it will be interesting to see how it grapples with new discoveries in neuroscience.
The references in classical Indian literature I really admire always refer to "destroying illusion (maya)": learning how to remove false cognition from pure sense perception, and seeing things with the vision of Brahman ;)
Which new discoveries in neuroscience affect the idea of Brahman?
Does anything remain? This is the big point of contention between Buddhists and Advaitins. We say there is, they say there isn't. However, the Mahayana concept of Tathagatagarbha does come to a similar conclusion in a roundabout way.
You seem well versed in the matter; can you recommend sources to learn more about all this?
> But when I use the word consciousness, I simply mean experience: pleasure, pain, visual or auditory experience, et cetera.
But that just kicks the can again. What is "experience"? What is pain versus pleasure?
Ultimately we have to arrive at a reductionist definition that doesn't reference undefined terms. After many years of thinking about this problem, here's my best shot:
Experience is equivalent to state change, e.g. the state change induced in a thermometer or a radio receiver due to stimuli. Consciousness is ultimately a system of modeling that is complex enough to have developed a model of itself, thus "realizing" that the system is separate from the environment that produces stimuli.
Humans arrived at consciousness through evolving larger and larger brains whose main goal is to model the environment and behavior of other individuals in order to predict what will happen next, in order to make plans for survival.
Panpsychism is then scientific, and perhaps inescapable, if we use these definitions. But, it doesn't mean that rocks and black holes are conscious. Only those things that undergo state change in response to stimuli and are complex enough to have models of themselves. There is no evidence that rocks have information-processing capabilities at all, and worms' brains are too small to have a model of themselves, however simplified.
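A minimal sketch of the proposed distinction, under the definitions above (all names here are hypothetical illustrations, a cartoon rather than an implementation of consciousness): a thermometer undergoes state change in response to stimuli ("experience"), while a self-modeling system additionally keeps a model that includes itself as separate from its environment.

```python
# Cartoon of the proposed definitions: "experience" = state change
# induced by stimuli; "consciousness" = a modeling system complex
# enough to contain a model of itself as distinct from its environment.

class Thermometer:
    """Experiences (state change) but models nothing."""
    def __init__(self):
        self.reading = 0.0

    def stimulus(self, temp):
        self.reading = temp  # state change = "experience"

class SelfModeler:
    """Models its environment AND includes itself in that model."""
    def __init__(self):
        self.world_model = {}

    def stimulus(self, name, value):
        self.world_model[name] = value
        # Crucially, the model also contains an entry for the modeler itself:
        self.world_model["me"] = "the system holding this model"

    def has_self_model(self):
        return "me" in self.world_model

t = Thermometer()
t.stimulus(21.5)

s = SelfModeler()
s.stimulus("temperature", 21.5)
print(t.reading, s.has_self_model())
```

On these definitions the thermometer "experiences" but is not conscious, and the self-modeler meets the structural criterion -- which is exactly why the definitions, not the code, carry all the philosophical weight.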
If I can summarise the idea, it's answering the question "how can consciousness arise from unconscious matter (like the human brain)" with "it can't, matter is already conscious (i.e. it has conscious experience)".
It's a very simple explanation, a very Occamist explanation. But at the same time it is not _really_ an explanation. It just pushes the question back from "how can complex matter like the human brain be conscious?" to "how can simple matter like quarks be conscious?". But panpsychism does not even attempt to answer the further question -- it's simply ignored. That is, panpsychism says that "matter is conscious" but it doesn't say _why_ matter is conscious, or what exactly consciousness _is_ after all. It just passes the buck.
So it's not very different from answering the question with "because God willed it". I mean, that's an answer too. It's a metaphysical answer -- but so what? Panpsychism is not metaphysical, but it's not more explanatory than God. Not until it explains _what_ consciousness is and _why_ quarks have it.
The phenomenon is real, since we can detect it with our intelligence and discuss it (otherwise no one would understand me). But it is neither necessary nor provable that it drives our behavior, or that it feels and experiences events. It observes us, we observe it, and this observer meta-identifies with a body that is aware of the idea, like a fixed point in calculus. So (imo) it is not a physical control, feedback or consciousness, but a symbiosis of the physical and the "observing".
Anyway, I can’t see why we should dismiss it and move on, like some people suggest. It may be a key to something important about reality.
There's a riddle in the Gospel of Thomas:
"Make the inside like the outside and the outside like the inside."
Here's an interpretation: "inside" and "outside" refer to the inner world and the outer world.
These share some qualities or aspects: both are known only subjectively, and there's a similarity of form (there are e.g. red triangles in both worlds) and susceptibility to will (we can move our bodies and we can affect what we imagine, aka the faculty of imagination. Both of these abilities present the same problem: what is intention, and how does it relate to will to cause changes in perception?)
Otherwise they are very different. The outer world is made of matter/energy in various configurations and, while we don't (yet) know what the inner world is made of, it's obviously very different from matter/energy. Perhaps the main difference is that you can more-or-less manipulate the contents of imagination (I'm using the word here as a proxy for the whole of the internal world) "at will", but the contents of the real world obey a physics that dictates that you more-or-less have to push things with other things to get anything in particular to happen (please forgive my gross simplification).
Now consider the subjective experience of an omnipotent, omniscient being. If you can know and alter the "outer" world just as easily as you can know and alter the "inner" world, wouldn't they effectively be the same?
In the context of a Gnostic tradition, I think it means that, when one unifies the self with the Self, the "real" world and the "imaginary" world also become one.
What I mean by that is, everyone is incredulous that stones can have the same experience as humans, but consider it the other way around: humans have essentially the same experience as stones. Both react to the physical processes happening at their peripheries in dynamically-consistent ways, i.e., computing a quasi-deterministic function. The conclusion then is that since we are conscious, there's no reason that stones couldn't also be. It's a matter of degree, encoded in the complexity of that quasi-deterministic function mentioned above.
The main difference between us and stones is that, in order for our DNA to have arisen and propagated itself this far, it had to have a strong set of skills to survive over the history of the universe. It can literally create lumps of proteins that achieve that goal. For the body of the animal, being aware of its own history by constructing a self-consistent and closed narrative arc (the 'I' that we all contend and interact with) obviously has advantages, since it would be hard to operate in society without pre-defined and agreed-upon roles, which we assume as our identity.
So the fact that we experience reality as such, with our emotional reactions and persistent self-narrative, is an evolved trait which helped the human genome become the dominant life form on Earth.