Philosopher Philip Goff answers questions about panpsychism (scientificamerican.com)

An interesting thought experiment is to imagine an incredibly detailed computer simulation of a brain, down to the level of individual cells and neurotransmitters. (While this is beyond our current capabilities, it is in principle possible.) Hook the brain up to some decent IO devices, and you’d have an extremely human-like “AI”. (I’m glossing over some tricky details like, do you start with a fetus/baby brain and let it grow up, or somehow clone an existing brain...tricky no doubt, but again, in principle solvable.)
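
To make "simulate down to the level of individual cells" concrete, here is a toy sketch of a single leaky integrate-and-fire neuron in Python. It's a standard textbook model with made-up illustrative parameters, vastly simpler than any real neuron, but it gives the flavor of what a cell-level simulation computes: membrane voltages evolving under rigid rules, with spikes when a threshold is crossed.

    # Toy leaky integrate-and-fire neuron (standard textbook model,
    # far simpler than a real neuron; parameters are illustrative only).
    def simulate_lif(input_current=1.5, dt=0.1, steps=1000,
                     tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        v = v_rest
        spike_times = []
        for step in range(steps):
            # Voltage decays toward rest while integrating the input current.
            dv = (-(v - v_rest) + input_current) / tau
            v += dv * dt
            if v >= v_threshold:            # threshold crossed: emit a spike
                spike_times.append(step * dt)
                v = v_reset                 # reset and start integrating again
        return spike_times

    print(simulate_lif()[:5])  # times of the first few spikes

A whole-brain simulation of this kind would need on the order of 10^11 such units, plus synapses, neurotransmitters, glia and so on, which is exactly why it's "in principle possible" but far beyond current practice.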

So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?

On the one hand, you might argue of course not, it’s just a computer; computers “obviously” don’t have feelings. It’s just a bunch of bits. A precise collection of patterns of electrical charges evolving in time according to rigid rules.

On the other hand, is our own biological brain any different? We don’t think of a few cells as having consciousness, but somehow the broader collection does. Can the biological substrate underlying those complex arrangements really matter?

Maybe consciousness is like a “soul”, it’s a completely untestable, untouchable “ghost” that can inhabit anything. So maybe everything really does feel, in its own way? Maybe my computer already has some form of consciousness...


To build on your thought experiment:

> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?

If you answer "yes" to this question, what happens if you split the computations of that simulated brain over a million people equipped with pen and paper? Where is its consciousness happening now?

I have yet to hear a satisfying answer from anyone who believes that an AGI will automatically be conscious. But then again, I have yet to hear a satisfying explanation of consciousness from anyone...


Ah, Searle's Chinese Room, for which the widely accepted response is the system reply: the system as a whole, and not one element of it, is conscious.

Consider performing the algorithm of Alpha Go Zero through a million people equipped with pen and paper. One can hardly argue against this being possible in principle, but what is playing the game?

This is not proof that AGI will automatically be conscious, or even that some sort of conscious machine intelligence is possible; it is merely intended to suggest that Searle's line of argument does not show its impossibility.


There are two distinct meanings of consciousness being used here: my own experience of consciousness in me (it's always first person), and the conclusions people make about consciousness in other people or animals - that they're conscious because they're like them in important ways. The Chinese Room argument is confusing because it conflates intelligence (understanding Chinese) with consciousness (knowing what it's like to understand Chinese).

So I simplified the problem, by replacing the ability to understand Chinese with the ability to see in colour: http://www.fmjlang.co.uk/blog/ChineseRoom.html

While the simplified system is able to distinguish colours, how does it have the first person experience of seeing in colour?


I'm not sure I understand what your link has to do with consciousness. Yes, people experience the world differently. Someone with tetrachromacy will see the world differently than we do, but that hardly invalidates our much less capable experience. Perception organs differ somewhat between individuals. I'd see little reason to make a distinction between the organic ones we are born with and those humans have managed to invent.

So yes, the guy in your room can experience colors by using the sheets (though in a different and much less capable way than most of us, thus his not wanting to equate this with our common understanding of experiencing color), and is color blind without them. And I kind of go deaf when plugging my ears, though that experience is vastly different from that of someone who never had hearing to begin with.

That doesn't answer what it is like "seeing red" or "feeling blue". I don't know what it's like for you. I can only look at your behavior, see it being similar to mine under the same circumstances, and thus assume your internal state to be similar. Until your behavior tells me it's different... like someone blind running into a wall, or someone approaching a chasm with no signs of fear.

ps: had to google "feel blue", as its meaning "depressed" isn't really a thing in my native language. Translating (color-based) idioms is actually kind of telling... it seems only possible directly if both cultures share the relevant aspects / experiences.

So yeah, conflating consciousness with intelligence doesn't make a lot of sense to me. But I'm not sure equating it with perception gets us anywhere better (hope that idiom didn't get lost in translation).


With the exception of our thoughts, pretty much all of our conscious experience, i.e. everything involving qualia, is due to perception.

The person in my room (an achromat) can't experience colours at all, and they're able to tell you that. Their eyes have no functioning cones. They see everything in monochrome, so to them, watching a colour movie is how watching a black and white one is to us. Under low light conditions our cones are inactive and we see only in monochrome using our rods, just like achromats do. So people with normal colour vision can experience what it's like to be achromatic, but achromats can't even imagine what colours are like.

In the room, the person can only tell what colours the papers are by observing how dark they look when overlaid with the coloured filters (whose colours they know).

The metaphorical use of colours in the language isn't really important, and as you point out is dependent on culture.


That is a good argument, though you may not agree with my reason for saying so. Clearly, the system as a whole can discriminate a wide range of colors, despite the fact that no single component, including the person who is executing the process, can do so. This is the essence of the system reply, and I think this thought experiment illustrates it well.

There is a sense in which the system has an experience (necessarily first-'person'), but it is nothing like what we experience on seeing color. It is no more or less an experience than an ordinary color meter would have, and it lacks any awareness or concept of having an experience, no memory of it, no recognition of a similarity to previous experiences or of it being unlike any previous experience, no ability to imagine such experiences, no expectation of future experiences, and no emotional reaction to it ('my favorite color!') It is an experience only in the weakest sense possible.


> Consider performing the algorithm of Alpha Go Zero through a million people equipped with pen and paper. One can hardly argue against this being possible in principle, but what is playing the game?

I prefer a simpler analogy: what step in a sorting algorithm actually does the sorting? The question is actually nonsense, because every step is necessary, and removing any single step will break the sorting property for some inputs.

When you have all the relevant information, it's clear that the original question itself is ill-posed. We lack some of that information when it comes to consciousness, hence why the Chinese Room is convincing to some.
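
To make that concrete, here's an insertion sort sketch in Python. Point at any single line and ask "is this the line that does the sorting?" and the question dissolves: the sorted output is a property of the whole sequence of comparisons and shifts, not of any one step.

    # Insertion sort: no individual step "does the sorting".
    def insertion_sort(xs):
        xs = list(xs)                       # don't mutate the caller's list
        for i in range(1, len(xs)):
            key = xs[i]
            j = i - 1
            while j >= 0 and xs[j] > key:   # each comparison is necessary...
                xs[j + 1] = xs[j]           # ...and so is each shift
                j -= 1
            xs[j + 1] = key
        return xs

    print(insertion_sort([3, 1, 2]))  # [1, 2, 3]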


My phone is black, but looking at individual atoms, where is the color?

You are right that we don't have a good (any?) definition of consciousness, therefore it is difficult to properly ask if the Chinese Room has it. After reading and talking with people, it seems that consciousness is something one can only claim to have (like a subjective feeling), but can't really demonstrate (I can only assume you are conscious because I believe I am, and you behave similarly enough).


My take: consciousness is an emergent property of the system as a whole as it becomes sufficiently complex. Yes, those million people (you'd probably need more) working with pen and paper create, in combination, an additional new consciousness, albeit an absurdly slow-thinking one.

I totally agree with consciousness being an emergent property.

But can a bigger system have consciousness when individual parts of it are already conscious? I don't see why this should necessarily follow. I'm sure there are somewhat similar and interesting emergent properties, but we'd probably call them something different. Probably closer to the kind of group identities we already have with lots of humans interacting. Though obviously we'll never know what it feels like, just like none of my neurons will ever feel like I do.


I think a better way of putting panpsychism is to say: what you are doing with your science is describing how consciousness behaves. So when you describe a simple system with maths, you are describing the consciousness of that simple system. Describe a complex system with mathematics and you are describing the consciousness of a complex system. You don't need to describe the transition from simple consciousness to complex consciousness with anything other than mathematics because mathematics already describes the transition that arises from the composition of simple objects. What it feels like to be a simple object vs a complex object is outside of the scope of science.

What differentiates this scenario from individual neurons firing together to make the human mind? Can you keep taking that line of thinking down to the subatomic properties of the neurons themselves? I'm guessing this is the argument for panpsychism. The emergent property of human consciousness could be seen as the culmination of all the individual parts, which themselves have some form of primitive consciousness.

Isn't that argument roughly as useless as attributing the ability of a watch to show time down to each individual mechanical component? It's technically true, but not very useful to elucidate what actually makes the watch a watch. For example, it sounds kind of silly to say that a single watch gear has some form of primitive time-keeping ability, or has some form of primitive ability to tell me whether I'm running late. The universe is rife with examples of emergent complexity in which the whole is greater than the sum of its parts.

Well, it is still useful, just not to you directly. The distinction being that the right intermediary could make use of it.

If you had a physics model, it could describe the watch perfectly, but it would likely involve a very large group of matrices and linear algebra constraints. You technically could work through every equation given a large set, but examining the watch yourself would likely be quicker, if not learning to be a sufficiently good watchmaker.

However, if you assemble a computer model to handle every component's interactions individually and a physics engine to do the number crunching, you could get a working mathematical model of the clock.


Alice: We can agree the brain is conscious, yes?

Bob: Of course.

Alice: But what if you scaled up all the neurons in the brain a factor of 2? Would it still be conscious?

Bob: Sure, if it's the same system running the same way. There's no reason to expect it to matter.

Alice: Then if it was a factor 4 larger?

Bob: Absolutely. Human brains are sized as they are as a matter of evolutionary coincidence. Bird neurons are much more compact than ours, and still manage much of the same work.

Alice: Right, but if every neuron also contained chloroplasts?

Bob: Which did what? There isn't much light there.

Alice: Nothing, they're completely ancillary.

Bob: Then yes, evidently it's still the same system... at what point does this become a hard question?

Alice: Just hold on a few orders of magnitude...


This scaling argument only works if the constituent molecules, atoms and particles also scale proportionally.

If you scale the system but keep one end tied down it isn't the same system.


Presumably the grandparent is thinking of organisms like Pando [0].

[0] https://en.wikipedia.org/wiki/Pando_(tree)


Yes, that makes more sense. ta

No, it holds much more strongly than that. The idea that consciousness can only arise in neurons shaped exactly like human neurons—and not, say, slightly larger neurons like exist in many other animals, or neurons with slightly different energy-carrying chemicals, or whatever—is implausible, because evolution isn't goal-directed. Evolution didn't find the one true physical layout that leads to conscious experience, because evolutionary strategies fundamentally can't solve those sorts of problems.

I'm with you on this.

My reason for commenting related purely to the looseness of the conversation w.r.t. the word "scale", which, it seems, I had misunderstood to mean an affine transform.

In light of this comment, I now see you were talking about some growth function or change in topology/structure.

Thanks for the clarification.


If you haven't already, I highly suggest you read Permutation City by Greg Egan; these kinds of questions are explored there in great depth.

Yes, that is indeed a great read!

> what happens if you split the computations of that simulated brain over a million people equipped with pen and paper? Where is its consciousness happening now?

The thing with information is that it is not as localized as other physical quantities like spin, mass, or even energy. Conversely, if the information were localized, we would call it a lack thereof. Information shared between systems means correlation between them. The same goes for information processing, where changes in one system correspond to changes in the other. It happens in between.


A million? There are over 100 trillion neural connections in the human brain. And look up artificial vs. biological neurons. With the sheer number and added complexity of biological neurons, there are likely not enough atoms in the universe to provide enough ink and paper to run this simulation.

Also, you run into the problem of time slices. One step in a simulation of my brain is not conscious. There must be some self-reference going on in there that gives us this illusion over time, which I doubt a pen and paper model could capture.

I'm of the school of thought that a Turing machine, let alone pen and paper, might not even be capable of this simulation. For example, it's possible that there are quantum effects going on in biological neurons that Turing machines are not equipped to simulate.


Quantum computers can be perfectly simulated by classical computers (but only inefficiently, as far as we know). So at best, you could claim that computation in the brain may involve quantum effects and therefore it would not be tractable to simulate it on a classical computer. But a quantum computer is still a Turing machine.

Of course, we can not yet rule out the possibility that there exist more powerful models of computation than Turing machines. We just know quantum effects don't lead there.
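
For anyone wondering what "perfectly but inefficiently" means here: simulating a quantum computer classically is just linear algebra over the state vector. A minimal sketch in Python, applying a Hadamard gate to one qubit:

    import numpy as np

    # One qubit is a length-2 complex vector; n qubits need 2**n amplitudes,
    # which is why classical simulation is exact but exponentially expensive.
    state = np.array([1.0, 0.0], dtype=complex)           # the |0> state
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    state = H @ state                                     # apply Hadamard gate
    print(np.abs(state) ** 2)  # [0.5 0.5]: equal chance of measuring 0 or 1

The exponential blow-up in the state vector is the whole story: nothing non-computable, just intractable at scale.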


>Where is its consciousness happening now?

Our brain organization can be described by its various information cascades. But if consciousness is implementing the right kind of information cascade, then a brain simulated by a million people would be conscious. The "where" of consciousness is the same question as the "where" of informational relationships. But this isn't a well-formed question. There is no one location where some sequence of bits and its proper interpretation is stored. It's distributed among the components involved in the information cascade. The "where" of consciousness is the same.


> I have yet to hear a satisfying explanation of consciousness from anyone...

So it's not surprising if we struggle to answer "Can machines be X?", when X is undefined.


To take it a step further, let's assume we simulate the whole universe. We figured out all the rules for interaction so that there is nothing left over or left out of the simulation. All the rules and state are encoded.

Do you think beings in this second run of the universe are conscious? Or do you think that somehow something has been left out of the simulation?


How could I tell if there's consciousness in that simulated universe, when I can't tell whether other beings in this original universe are conscious?

If you replicate the universe, then the properties would carry over from the reference to the simulation. It shouldn't matter whether we use information vs. whatever the reference implementation is using. We do this sort of thing when we simulate known systems, like a computer running in software.

So if there are conscious beings in the universe, they'll be in the simulation. You hold that you're conscious, so then I'd hope you'd agree the simulation has at least one conscious being in it.

I don't know why some folks' consciousness test only seems to work for themselves. I've been convinced by the simple argument by induction: I am an example of an animal. I have consciousness, so others likely do too.


Induction is not proof :-) I'm also convinced, but primarily by Turing's argument ad politeness: "Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks".[1]

I haven't really found a compelling rational argument to fully reject solipsism.

Though I agree that looking for yourself in a perfect simulation is a clever hack. :-)

[1] https://books.google.com/books?id=7pFHBAAAQBAJ&lpg=PT51&ots=...


> I have yet to hear a satisfying answer from anyone ...

Here's Dennett on this very subject, about 20 years ago. In Consciousness Explained he imagines what a Turing Test conversation between a human 'judge' and a 'Chinese Room' might look like:

Judge: Did you hear about the Irishman who found a magic lamp? When he rubbed it a genie appeared and granted him three wishes. “I’ll have a pint of Guinness!” the Irishman replied and immediately it appeared. The Irishman eagerly set to sipping and then gulping, but the level of Guinness in the glass was always magically restored. After a while the genie became impatient. “Well, what about your second wish?” he asked. Replied the Irishman between gulps, “Oh well, I guess I’ll have another one of these.”

CHINESE ROOM: Very funny. No, I hadn’t heard it– but you know I find ethnic jokes in bad taste. I laughed in spite of myself, but really, I think you should find other topics for us to discuss.

J: Fair enough but I told you the joke because I want you to explain it to me.

CR: Boring! You should never explain jokes.

J: Nevertheless, this is my test question. Can you explain to me how and why the joke “works”?

CR: If you insist. You see, it depends on the assumption that the magically refilling glass will go on refilling forever, so the Irishman has all the stout he can ever drink. So he hardly has a reason for wanting a duplicate but he is so stupid (that’s the part I object to) or so besotted by the alcohol that he doesn’t recognize this, and so, unthinkingly endorsing his delight with his first wish come true, he asks for seconds. These background assumptions aren’t true, of course, but just part of the ambient lore of joke-telling, in which we suspend our disbelief in magic and so forth. By the way we could imagine a somewhat labored continuation in which the Irishman turned out to be “right” in his second wish after all, perhaps he’s planning to throw a big party and one glass won’t refill fast enough to satisfy all his thirsty guests (and it’s no use saving it up in advance– we all know how stale stout loses its taste). We tend not to think of such complications which is part of the explanation of why jokes work. Is that enough?

Dennett: “The fact is that any program that could actually hold up its end in the conversation depicted would have to be an extraordinarily supple, sophisticated, and multilayered system, brimming with “world knowledge” and meta-knowledge and meta-meta-knowledge about its own responses, the likely responses of its interlocutor, and much, much more…. Maybe the billions of actions of all those highly structured parts produce genuine understanding in the system after all.”

His point (if I need to re-explain it?) is that the Chinese Room argument is superficially compelling. But if you actually follow through and imagine the Chinese Room doing the things that the thought experiment defines it to be doing, then you get a picture of an extraordinarily complicated system, and the nuts and bolts of exactly what that system is running on fall away into irrelevance.


> then you get a picture of an extraordinarily complicated system, and the nuts and bolts of exactly what that system is running on fall away into irrelevance

I think this is a common failing of thought experiments. Gedankenexperiments help us ask better questions and design better real experiments, but if you're trying to use them to make fundamental conclusions about reality in an a priori way, you're likely going to fail.


Maybe the consciousness of a baby is different from that of an adult and the baby has barely any consciousness? I guess everything depends on what you define as "consciousness".

> you start with a baby brain and let it grow up

You can grow biological brain structures from stem cells. These are called brain organoids or neural organoids, and they may one day be used in machines and may one day have consciousness properties. Interview about this with Lex Fridman (MIT Artificial Intelligence teacher) and Paola Arlotta (Harvard brain neuroscientist): https://lexfridman.com/paola-arlotta/

> We don’t think of a few cells as having consciousness, but somehow the broader collection does.

This is called an "emergent property" (see "Emergence" in Wikipedia), which is a sub-area of complex systems theory. There's a theory that consciousness may appear as an emergent property of a highly complex, dynamical and evolutionary AI system.

> imagine an incredibly-detailed computer simulation

Some paradigms for approaching AGI (Artificial General Intelligence / Superintelligence) are: 1. Simulate the brain in low-level detail; 2. Simulate the functioning of higher-level structures of the brain (cortical columns, visual cortex, hippocampus...); 3. Create artificial neurons (that can be used for neural networks, deep learning, etc.); 4. Simulate simple brains and nervous systems (e.g. jellyfish, insects, the C. elegans connectome project, etc.) to understand how intelligence works; 5. Simulate the behavior of the brain ("cognitive architectures"); 6. Create an evolutionary system that could converge to brain-like functionality.


It's interesting we don't have much idea about it yet.

It probably helps to think about some edge cases to get some insight.

What happens when you sleep and you lose consciousness?

What happens in cases of brain injury where consciousness goes into weird failure modes - thinking it wakes up in the same state every day, thinking it's a month earlier or so, collapsing, etc.?

What happens in cases of a distorted/malfunctioning brain, schizophrenia, etc.?

I've read somewhere that if you put in an implant which mimics neuron activity, your brain will build structure around it just like with normal neurons. What would happen if you took 1 square mm of brain and replaced it with an implant, slowly replacing the whole brain? What happens when you attach a brain to a larger structure, one orders of magnitude more complex - will consciousness transfer there? Can you disconnect the original brain, leaving consciousness behind?

As we don't yet have computers powerful enough to compete with the brain, is it possible to connect two brains instead?

Memory likely plays a central role. You are what you remember, it seems. Can you boost memory capacity with some implants - will it make more consciousness?

There are many experiments here that are not yet feasible, but some should be. Also completely brutal in many cases.


No idea about connecting brains, but dividing one actually works, and you arguably get 2 independent consciousnesses with different sets of modules available to them (one has the language processing that the other lacks, for example).

Look up split brain experiments.


That's interesting; points towards "emergent from low level structure".

I wonder if it's possible to quantify consciousness and how much of it you lose. It seems like it's not a half/half split, i.e. 1+1 != 2; can you split them again, and how far can you take it?

I also wonder what would happen if you left them be for a while and then connected them back - would they merge or stay diverged?

Maybe you could run it on an ordinary neural net; it would be easier to quantify. Maybe some numbers would reveal something interesting, i.e. maybe neural nets are exponentially better when connected and that's the whole trick. It won't answer questions about consciousness directly though.


The problem is time and experience, two things you cannot give to a software mind. Humans are inherently social and our brains develop through socialization over time. Our brains are shaped by fetal, post-natal, and childhood socialization and experience. The craziest edge cases of human childhood show this: if you grow up without socialization, in isolation/solitude, your brain will be vastly underdeveloped and you will have very serious personality deformation.

So no, I don't think it's possible to create a simulated brain that behaves similarly to a human, as the biological medium is in a constant state of dissolution, repair, evolution, and adaptation. This is how we learn, get stressed, get tired, get certain emotions... it's a whole new layer of programming you'd need to perfect beyond just replicating a brain at one moment in time.

A brain in silico would be apples:oranges different than a biological brain.


It would be harder to emulate but I see no intrinsic reason you couldn't.

I get your point, but I also think you've answered it yourself. You can give time and experience to a software mind by programming in that new layer...

How do you program a software mind to have had infancy, childhood, and a lifetime of social human interaction? I'm saying that's MUCH harder than simply making neural nets to perform specific duties.

Yeah it'll be harder, but I don't see anything in principle which makes it impossible. If you can simulate the growth and plasticity of the brain to enough precision, then you could probably simulate 20 years of development in a very short time.

Maybe I'm making a massively overly pedantic point. It's just that your post began with a claim that certain things "cannot be given" to an artificial mind, which sounded like you were proposing absolute limits to AI.


> down to the level of individual cells and neurotransmitters

My guess? No, it won’t be conscious or even meaningful.

More fundamentally, our inability to answer this question drives at our ignorance of our own neurology. We don’t know the building blocks of thought. It might happen at the neurochemical or electrochemical level. It may even depend on quantum effects.


As a neuroscientist, I have the opposite intuition. If we swap out one neuron with a digital version, you'll be the same person. Do that 84B times and at the end of the day I see absolutely no reason why you would lose consciousness or sense of self.

Do you intuit - or think - or whatever - as a neuroscientist - that speed of this "swapping" matters?

What about "swapping" one-by-one vs all-at-once? Would that matter?

What I'm getting at - which is where things seem to go sideways when you think about it - is this: what if you did the swap, but the artificial neurons remained outside the body, and you did it near-instantly (vs slowly, one-by-one)? I am speaking of relative speeds here; say "near-instantly" means the entire swap takes one second, vs the slower manner of each swap taking a millisecond (or whatever).

Would that matter?

Now - what if instead of swapping - you did a parallel simulation instead - one-by-one, recreating the brain, but the artificial version operating in lockstep with the original; when one neuron "fires" the same artificial neuron "fires", same "paths" taken, etc.

Then you kill (choose your method and make it quick) the natural brain - instant swap? Or is there something different? Where does the "consciousness" go? If it is different, why is it any different than the "near-instant" swap?

Why would making that "lockstep-copy" matter, vs not making a "copy"?

I think you get what I am saying. Think on it a bit. There isn't a good answer that I am aware of.

Note: I'm assuming an "instant kill" - death to the natural brain faster than neural signals can travel neuron-to-neuron, ideally. We can posit the idea that if it were any slower (and especially if it were really slow) that the two brains would diverge in experiences, and would become two different "consciousnesses". But it does make you wonder why this should happen with a copy, vs not (in theory?) with a one-by-one swap. Heck - maybe there's an answer in there somewhere...


> Then you kill (choose your method and make it quick) the natural brain - instant swap? Or is there something different? Where does the "consciousness" go? If it is different, why is it any different than the "near-instant" swap?

This seems to be a non-sequitur. If we simply understand consciousness as a property of the brain, as long as you have 2 copies of the same brain, you naturally have 2 consciousnesses. The 2 may be equal or not (most likely not, given that random noise is extremely likely to play a role in brain processes), but they are definitely not a single object.


Neurons are only part of the brain (50% ?). Glia have been shown to be part of the NMDA repair/replace cycle [0]. And that's just some prelim stuff. We've no idea what else the other cells may/not be doing yet. Just replacing neurons is unlikely to replicate a brain's functions. You need the whole caboodle.

[0] https://duckduckgo.com/?q=glia+cells+NMDA&atb=v173-1&ia=web


That only works if neurons do generate consciousness. The only thing we know is that their activity seems to correlate with consciousness. My guess is that you can't build a digital neuron, at least not one that has consciousness properties. That's because I take neurons to be what consciousness looks like when viewed through our senses.

A transistor junction does not have computational properties; it is only in aggregate that transistors can make a computer.

Supposing that the mind is the result of neurons' inherent consciousness properties is like attributing morphine's effect as arising from its "dormitive virtue" - it does not advance our understanding.


Neurons are much more complicated than that, and indeed may do computations on their own.

I am not sure of the relevance of that here, especially as the issue is not whether a computer could be made from neurons. My point is that the premise, that the mind is a result of neurons interacting, is not predicated on a requirement that each individual neuron has 'consciousness properties', whatever they might be. I cannot see how the computational abilities of neurons invalidates my analogy to silicon computers, where the computational properties are only found in aggregations of the basic building blocks, and not in the basic blocks themselves.

Also, it seems quite plausible that an aggregation of semiconductor junctions could have the same computational abilities as a neuron. This is, in fact, an active area of R&D.


But what's special about neurons aside from the causal relationships they have with the rest of the system? If their utility to consciousness is nothing but their causal relationships, then a digital replica will support consciousness just the same.

"nothing but causal relationships" is indeed all you'd need, provided they go the materialist way, i.e. that consciousness comes from specific arrangements of neurons. I don't think that makes sense, as nobody can give the slightest hint of a theory that would make that work. The reverse theory, that consciousness is primary, makes more sense to me. In that theory, the causality works the other way, and digital replicas will not have consciousness, they will be a mirror of what consciousness looks like when seen through our senses.

>In that theory, the causality works the other way, and digital replicas will not have consciousness

Why not? Digital stuff is made up of matter as well. Unless there's something special about neurons when it comes to consciousness, i.e. the conscious bits reside in neurons and nowhere else for some reason, there's reason to think non-biological structures that produce the same output would also be conscious. Any substantive theory of consciousness, materialist or not, will need to include a place for structure and dynamics, i.e. information cascades, within their theory to account for the correlations we observe between conscious states and brain structure and dynamics. But these qualities are present in a digital implementation as well.


> I don't think that makes sense, as nobody can give the slightest hint of a theory that would make that work

I'm not sure what could possibly answer this problem, at least to the satisfaction of the people who think it is a problem. Personally, I'm relatively happy that Daniel Dennett's ideas demonstrate how what we think of as consciousness, qualia etc can be explained via mechanistic/algorithmic processes.


> and digital replicas will not have consciousness

They will also go on internet forums arguing that while they, of course, do have consciousness - the digital replicas of themselves would not!


I have trouble accepting that existence of something that appears so high-level as consciousness could depend on implementation details.

We don't experience the minutiae of the activations of billions of neurons, or the work of glia. We experience "executive summaries" of them. So it's natural to assume that those summaries can be represented in another physical form.

Why are physical representations of those "summaries" there? Because that's how our brains do high-level decision making.

This leaves out the question of why we experience anything at all. And I have trouble imagining that there are physical processes (quantum or not) which create a mysterious consumer for "executive summaries". If it were so, we would have a decision-making pathway and a useless subjective-experience pathway. That's hard to explain from an evolutionary standpoint.

I prefer to think that "executive summaries" abstracted from physical representation just exist without any need for mysterious consumers. Not unlike how natural numbers exist according to mathematical platonists.


The analogy also lets us look at the philosophical zombie argument from an interesting angle.

Is it logically possible for a universe to exist that is physically identical to ours, but where physical representations of the natural number 2 correspond to the natural number 3? Or where physical representations don't correspond to any natural numbers at all?

Another funny take on the inverted qualia argument: we can subitize small numbers of items, that is, feel the number of items without counting. Can you imagine someone identical to you who feels the number of two apples as you feel the number of three apples?


I agree that we don't know enough to have a confident answer, but I find Scott Aaronson's question quite persuasive: if you were to replace brain cells with functionally-equivalent manufactured devices one cell at a time, is there a point at which the consciousness goes away?

> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?

I think there are more easily accessible examples that investigate this problem closer. Are animals conscious? How about children? How about very slow people?


To me it seems intuitively obvious that the answers are yes, yes, and yes. Even ants seem to have some sort of consciousness to me.

But of course it's completely unknowable. Maybe every other single creature on earth is just an unfeeling automaton, and only I feel.

But I doubt it. ;-)


That's exactly the sort of thing I'd expect an unfeeling automaton to write -- or even believe -- to try to trick me into thinking it has feelings

Or better: when does an embryo become conscious?

If you create zombies people are going to be upset with you.

only if they can tell.

We probably are not yet able to tell whether we can fully simulate a brain with something we'd call a computer nowadays.

If we can, then indeed consciousness comes from the collective function of individual cells. But maybe there's more to the workings of the brain?


Or maybe there's less to consciousness.

Depends on the person :)

Joking aside, I agree with you. My point was simply that we're very ignorant about the brain and its workings, one way or another.


> imagine an incredibly-detailed computer simulation of a brain, down to the level of individual cells and neurotransmitters. (While this is beyond our current capabilities, it is in principle possible.)

The laws of physics start to become relevant here. It is quite likely that it is not possible to simulate a human brain using a classical computer, at least not at any kind of speed such that it can experience the world as we do.


Agreed. Are Kurzweil's predictions holding true? I've not read any of his work, but I recall many doubting whether it would stand the test of time due to similar concerns around physical limitations.

There is an extension of that idea, which is to suppose that we could make a silicon neuron that can perfectly replace an organic one (state and all). Then you can replace neurons in the brain one by one, and the brain carries on functioning exactly like it always has.

Eventually the brain is entirely silicon, but to the owner subjectively nothing has changed, they are as 'conscious' as they ever were.


> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness? Does it feel?

I worry we'll never know the answer to this, even if we manage to pull off such a simulation.


Ok, you can have the simulation of cells and neurotransmitters, but what is the model of how they work? You are just making stuff up. It is not the same at all. It's cargo-cult-level hubris.

That's the Ship of Theseus problem. Over time, Theseus' ship had every board and nail replaced (to preserve it). Is the ship still the same ship?

The Goff answer to this is that since you've used intrinsically conscious matter to build this machine, it is conscious.

Feelings are just that: numbers. A combination of tiny electrical currents and chemicals moving around inside your head.

I would never make a copy of my brain (or somebody else's) with the intention of disconnecting it later.


> An interesting

Not really.

> So the question is, this simulated brain, which is behaving very similarly indeed to a biological brain: does it have consciousness?

Obviously yes. Either that or our brains are filled with magic unicorn dust.


Your computer simulation does not solve the "stopping problem". This means that the artificial brain does not know what is good or what is bad. It can not know everything about itself. So the brain will be completely unstable and fail after a while.

On the other hand, if there is some guiding (like a programmer), the artificial brain can be functional. If the real brain would be guided by some kind of consciousness, this would work the same.

The consciousness needs to be in a different dimension, in a way that physics does not change. So it would be like a 2D pool table, where we can describe exactly the movement of every ball. But sometimes a ball gets hit by a player, outside the table. And this gives us events that bring the balls to a certain direction.

In similar sense the brain can be played by a consciousness to get it working. In a way that is very hard to detect with physical instruments.


Do you mean the halting problem in Turing machines?

Okay, we have a proof that a machine can't solve the halting problem for every arbitrary Turing machine, but we can build algorithms that solve it for some cases.

Now, can a human solve the halting problem for every arbitrary Turing machine? We obviously can't, either. Make the machine complex enough and we can't wrap our minds around it.
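
A classic concrete example: whether the loop below halts for every positive starting value is the Collatz conjecture, which is still open. The program is a few lines long, yet neither humans nor machines can currently decide its halting behaviour across all inputs.

    # Collatz iteration: halts for every n anyone has tried, but whether it
    # halts for ALL positive integers is an open problem.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(collatz_steps(27))  # 111 steps before reaching 1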


Heh, that's a funny coincidence. I was just reading about panpsychism (and some variations of it) earlier today, specifically this article on forgotten philosopher James Ward: https://plato.stanford.edu/entries/james-ward/

It seems really alien to our current mental metaphysical model, but if you study the subject and analyze the arguments, it's not as absurd as it seems. It's important to realize that when philosophers use the word "conscious" they don't generally mean "awake" or "aware" in a human-sense; virtually no one is positing that a rock is conscious in the same way a human being is.

Perhaps the main reason why panpsychism fell out of favor among philosophers and philosophers of science in the last century is due to the increased popularity of positivism and the linguistic turn. I'm not sure if panpsychism is necessarily a path forward in terms of describing the universe, but I think it seems promising in terms of a prescriptive use of creating, or attempting to create, new consciousness with machines.

Further reading:

https://en.wikipedia.org/wiki/Panpsychism

https://plato.stanford.edu/entries/panpsychism/


> I'm not sure if panpsychism is necessarily a path forward in terms of describing the universe, but I think it seems promising in terms of a prescriptive use of creating, or attempting to create, new consciousness with machines.

If panpsychism is correct, then all of our machines would already be conscious. Creating new consciousness wouldn't be possible except maybe in the sense that matter itself can be created under certain conditions. What I think you're trying to say is that it might be possible to assemble machines in such a way that a combined "machine consciousness" emerges from the separate low-level consciousnesses of its components. Which may or may not happen with every machine anyway, I'm not sure how you would devise an experiment to test for this.


> If panpsychism is correct, then all of our machines would already be conscious. Creating new consciousness wouldn't be possible except maybe in the sense that matter itself can be created under certain conditions.

Right. In this case, the goal would be to "increase" the consciousness, rather than create it.

> What I think you're trying to say is that it might be possible to assemble machines in such a way that a combined "machine consciousness" emerges from the separate low-level consciousnesses of its components. Which may or may not happen with every machine anyway, I'm not sure how you would devise an experiment to test for this.

Essentially, yes. I meant that the model of panpsychism might serve as a useful blueprint for creating artificial minds from the accumulation of low-level components, even if it isn't necessarily useful as a scientific model of the universe.


> virtually no one is positing that a rock is conscious in the same way a human being is.

Well, I would argue that virtually no human is actually conscious in the way we tend to posit that humans are conscious.

I think we tend to underestimate the link between our understanding of our experience of consciousness and how memory formation and recall works.

Our own experience of our consciousness is as much an illusion as our experience of the world. When we consider and attempt to discuss our own experience of redness, we are generally left with only our memory of that experience, which is influenced by a great deal of post-processing, compression and abstraction. I believe the mental state in which you can actually experience pure redness is very different from the state in which you can attempt to analyze and discuss that experience.

Thus I would posit that perhaps it is not so much that the conscious experience of being a human and being a rock are different, but that they are processed, stored and understood differently.


> Our own experience of our consciousness is as much an illusion as our experience of the world. When we consider and attempt to discuss our own experience of redness, we are generally left with only our memory of that experience, which is influenced by a great deal of post-processing, compression and abstraction. I believe the mental state in which you can actually experience pure redness is very different from the state in which you can attempt to analyze and discuss that experience.

How do you know that your experimentation that reveals this disconnect isn't subject to the same problem?

I'm not sure how to word it artfully, but I mean the question sincerely: how do we avoid invoking turtles-all-the-way-down?


To abuse that analogy: it's turtles all the way down until eventually it's not turtles, but the border where that transition happens is blurry, because "turtle" is just an artificial, human-created category. Like the "chicken and the egg" paradox: reality is most likely that things change a little with each step until the category "chicken" eventually doesn't apply.

I do think that, as with our perception of color, there are likely fairly concrete inaccuracies in our memories of our experience of consciousness that we might be able to measure, and which might improve our understanding of the underlying phenomenon without fully explaining it.


> Well, I would argue that virtually no human is actually conscious in the way we tend to posit that humans are conscious.

Word. I don't have much more to add to that other than yes. I think that humans are delusional in their understanding of how advanced we are as a conscious form of life. We are not that much different from simple machines.


Not sure if it can even be attempted. If it turns out to be a phenomenon that originates at a sub-Planck scale, then "machinery" isn't something conceptually possible there.

If consciousness, or any phenomenon that affects our physical world in a way we can perceive, originated from some mechanism that takes place at a sub-Planck scale, then I think it would be conceivable for some super-advanced machinery to be able to affect or even replicate said phenomenon. Just not with our current level of technology.

Well, semantics stretch a bit at that level, because while at sub-Planck scale there would still be technique/technology involved, I'm not 100% sure it can be called machinery. I don't rule out that it could be machinery at our Newtonian scale that creates an environment where that ultra-micro thing happens, though.

Another point to make on panpsychism: how do people who reject the theory explain hierarchical life? To what level is a human conscious if its component cells aren't? Are ant colonies "conscious" due to the complexity of the behavior of the ants? Does that make ants less conscious? To an alien visitor, ant colonies can be considered single organisms, because they act with intent toward a specific goal and react to specific stimuli in semi-predictable ways. Extend this to human nation-states: is the USA "conscious"? It certainly has motivations, and a drive for self-preservation, and exerts directed will. Even if the component humans are not explicitly "aware" of the "thoughts" of the USA, can you disqualify consciousness?

Honestly, to me, this perspective demotes consciousness to obscurity. There's nothing special about it because it's everywhere. What is interesting is an explicit awareness and accurate internal model of reality. But this is a completely separate problem to that of consciousness.


>> To what level is a human conscious if its component cells aren't?

That is like asking "how can a car be a vehicle if its component parts aren't vehicles?". But we know that we can make things with new properties by combining things that do not have those properties. E.g. dough is sticky, but neither water nor flour is. Whiskey is alcoholic, but rye is not. Etc., etc.

The hierarchical question is answered by this intuition: combining things in hierarchies yields new things that didn't exist before, with properties that none of the original components had.


Dough is not "sticky"; that is your subjective experience of the thing you subjectively experience as "dough". Whiskey is not alcoholic, it contains alcohol in addition to other components. These distinctions are borne entirely of a human's interpretation of the world. These things are not in any sense "whole and separate objects" until a human determines what is a "whole object". This is necessarily subjective and thus out of scope.

Intuition fails here because intuition is necessarily a human reflex. We are trapped in our bodies and the historical context of our species's evolution, which means we interpret certain things in ways that are advantageous to us and our society. It's to our benefit to conceptualize whole objects because it is a cognitive shortcut to do so, which saves on precious energy, regardless of how physics dictates the arrangement of atoms.


That's a good point. One thing aliens have that we don't is the ability to merge and divide consciousnesses (like Vulcans do). This has been explored in science fiction but not so much in the real world. Although some conjoined twins share a consciousness:

https://www.cbc.ca/cbcdocspov/features/the-hogan-twins-share...

Within the next 10 or 20 years, we'll have the technology to connect brains to computers, and eventually join brains together and separate them. That will at least give us a theory of how the basic building blocks of consciousness work, perhaps along the lines of how people can lose an entire hemisphere of their brain and still be conscious. Maybe we'll find that only a tiny golfball-sized portion of the brain is required for consciousness. Maybe something smaller.

For example, the twins obviously have their own personalities, but how much of that overlaps into a single definition of self? It could be that there are always two entities. But my gut feeling is that we'll find that consciousness can merge and divide fairly effortlessly, working like a fractal to retain full memories from both halves. It will be a strange experience (a bit like psychedelics) but within a century will be commonplace.


> One things aliens have that we don't is the ability to merge and divide consciousnesses (like Vulcans do).

We should ask them to teach us. Or maybe they could license their technology to us for a small price.


At what point in development does a child become capable of hearing and interpreting sounds? Can a human hear sounds even if the cells in it cannot? Do ant colonies contain a primitive hearing because they react to vibrations in the air? Does the USA, when it ‘reacts’ to hurricanes?

Not knowing where definitions land is tough, but it's wrong to think that ground reality gives a damn that you find it hard. I don't know precisely where consciousness starts when a human develops. I don't know at what point its heart starts beating either. None of this has any clear bearing on what the objective truth is, or how it's implemented.

It's not like we don't know anything about consciousness. We know consciousness is not metaphysics, because we can talk about it and affect it with physical actions. We know consciousness is a property of the brain, for similar reasons. The only sensible way of satisfying the prior two points is that consciousness exists at program-level (aka. it's not some opaque oracle hiding in protons), so we know that too.


I think the superorganism perspective is more reasonable than it seems on its face. We manage to digest our food and regulate our endocrine systems with almost no conscious intention, and likewise it's unlikely our autonomic processes have much knowledge of what we believe ourselves to be thinking about. Most of our cells aren't even human. An individual ant doesn't need to grasp what the colony is doing any more than a particular neuron needs to grasp what a person is doing. It seems that the components do have their own awareness, it's just much more limited in scope than that of the whole system.

The best case against panpsychism, I believe, was advanced by complexity theorist Joe Norman: it's not unreasonable to believe that "consciousness" emerges at some levels of a complex system and not at others. No, we can't explain _why_ this is - since we hardly understand complex systems - but we do understand that phenomena exist (emergence) at some scales and not at others. Consciousness could be one of them.

A binary distinction requires a binary definition, and thus a threshold. Without a threshold, the spectrum is continuous. The assertion of panpsychism is that it is impossible and therefore improper to set a threshold at all.

> A binary distinction requires a binary definition

No it doesn't. See absolute value of x on real numbers - it is usually defined as:

   abs(x) = x if x >= 0, else -x
but it can also be defined as sqrt(x * x).

You can use this trick to define arbitrarily complex "binary" f(x) as a combination of these. And there are many other tricks of course.

But math is one thing, we see spectrums everywhere in reality and yet quantum mechanics is discrete (quantized :) ).
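
For concreteness, a quick check that the two definitions of abs agree, keeping in mind that sqrt here denotes the conventional non-negative root:

    import math

    def abs_piecewise(x):
        return x if x >= 0 else -x   # the explicitly "binary" definition

    def abs_smooth(x):
        return math.sqrt(x * x)      # no branch, same values

    for x in [-2.5, 0.0, 3.0]:
        assert abs_piecewise(x) == abs_smooth(x)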


x * x has two square roots, namely x and -x, and the result of this "function" (in quotes because it doesn't obey the definition thereof) requires you to choose arbitrarily between them, thus creating a dichotomy.

> on real numbers

Only your pedantry cornered us here.

:)

Thought experiment 1

We all start as a single cell. What biological event starts consciousness? Keep in mind you can't be half conscious — that's still conscious.

Thought experiment 2

Look at this random symbol: #. What are you conscious of? Chances are, you're conscious of things, of objects, of elements, of concepts. Things that have some unity to them. But things are mental constructs, typically socially shared. So, much of the content of consciousness arrived after birth, once we could learn these mental constructs.

Thought experiment 3

What is more complex: a single neuron (with thousands of mitochondria and other organelles, thousands of synapses) or a worm, with only a few hundred synapses? Similarly, what is more complex, your own body or the company you work for?


Read the criticism of panpsychism by Bernardo Kastrup, a Dutch computer scientist and philosopher: https://www.bernardokastrup.com/p/papers.html

https://iai.tv/articles/will-we-ever-understand-consciousnes...

https://www.reddit.com/r/philosophy/comments/emtud2/bernardo...

He proposes an alternative answer to the mind-body problem: idealism instead of materialism. https://www.bernardokastrup.com/p/papers.html


For some reason I enjoy trying to share intuitions for idealism. Let me try to do so here.

First, notice that it's impossible to prove which metaphysics (materialism, idealism, solipsism, Last Thursdayism, simulationism...) is correct. Next, notice that the situation is even worse: you can't even assign probabilities to those possibilities without making assumptions about whatever is "outside" reality -- which are by definition untestable.

Now, notice the strong urge to throw your hands up and say "since I can't know, I should just default to the most obvious choice" (which usually ends up being materialism). Avoid this devious trap! It's keeping something hidden.

Next, notice the one thing you can be sure about: the present sheer fact of experiencing something. Notice the mind's attempts to explain it. If you introspect carefully enough, you'll discover that thought isn't the most fundamental capacity you have: "pure consciousness" is. Consciousness takes the form of thoughts -- as well as all sensory perceptions -- and shapes itself into the experience you call "me and my life." You discover that thoughts can no more point at consciousness than a mouse pointer can point at the vibrant pixels it's made of (but boy, do they keep trying). And yet, somehow, consciousness can know itself non-dually and im-mediately.

At some point, the intellectual knowledge that you cannot assign probabilities to metaphysics finally penetrates your core, and there's a mind-shattering "aha" moment that turns your reality inside-out. Various traditions call this "awakening." You do not awaken. "Reality" or "consciousness" awakens to itself, rediscovering its primordial freedom.

Then maybe you go about preaching a philosophy of consciousness-only. Or maybe you just live the best and most loving life you can, knowing that in an impossible-to-communicate sense, we are all the one Love appearing as many.


I am my beloved and my beloved is I. I am you and you are me.

Very nice, your mouse pointer analogy is really good!

Thank you!

Well put! Can I quote this?

Thanks. Sure, I'd be honored!

+1 for Kastrup. I have read two of his books and his reconciling modern physics with spiritualism is in accord with my beliefs.

Personally, I believe that our universe, and possibly many universes in the multiverse, are teeming with life on many planets. Also, just my personal belief, but I think there is some universal creative force/consciousness.


Joscha Bach has an interesting take on how to combine idealism with materialism. See for example his 'The Ghost in the Machine' talk at 35C3: https://www.youtube.com/watch?v=e3K5UxWRRuY

Why so much dislike for this? It's a theory that pushes on some of the foundational assumptions we make. Maybe it resonates and maybe not. But just being open to a new set of ideas means we can maybe learn something new and interesting. Or not and that's fine too. People seem angry that someone else would talk about an idea they don't currently agree with.

Yes, I think anything that “sounds like” science but is pretty clearly not measurable or verifiable gets treated with harsh ridicule.

This is really a very different philosophical take on our own self importance in the universe. In western philosophy consciousness has been the pervasive thing that “makes us better” than everything else on the planet. (This is true even biblically if you think about it). The idea that this doesn’t make us special is actually very subversive to a lot of human self identification.

If you look into other traditions than the European one, I think the idea exists already to some extent. But in the circles we run in (and extra so societally) those views are still the minority.

Being told for thousands of years that you're special, and then having someone come in and say “maybe you're not,” is going to elicit a response from those invested in it.


> I think the idea exists already to some extent.

It's almost as if we've always known the true nature of our consciousness, we just keep overlooking it :)


To some extent it’s about inconvenience, right?

Having limited time and not being special (as in being inextricably connected to everything else, but not unique or important) really changes ethical considerations, many of which are the basis of our society. A lot of our human selfishness is natural and ingrained; survival of the fittest is fairly ugly. So we justify ourselves in various ways.

But once you know it’s hard to unknow, so there’s a lot to feel empty about in the quiet moments when you remember.


Personally, my response is not anything more hostile than a tendency for my eyes to roll -- and the reason for that is how this argument seems to use the flexibility of language to avoid the issues.

There is a problem that precedes this essay: 'consciousness' nominally has a meaning of merely being aware of, and responding to, the environment, but this is not the self-aware consciousness that is currently unexplained, and a definition that would include thermostats does not help with that problem. The author compounds this by additionally defining 'experience' as any form of 'undergoing interaction'.


> nominally has a meaning of merely being aware of, and responding to, the environment, but this is not the self-aware consciousness that is currently unexplained

By this do you mean we've explained how "what is it like to be a bat?" works? (see https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F)

If so, can you point me toward that explanation?


I was thinking of something simpler: the flexibility of meaning of the word 'consciousness' -- for example, in Merriam-Webster:

1a: the quality or state of being aware especially of something within oneself. (my emphasis.)

If you regard 'especially' to indicate that the emphasized clause is optional (edit: or maybe even if not), you could arguably say (and people do) that anything alive fits this definition, as do even things like thermostats. This, however, is not the consciousness that the science and philosophy of mind is concerned with: it is specifically concerned with the sort of consciousness that we exhibit, which is aware that it is an agent in the world, aware that it has this awareness, and aware that other people are also aware in this sense.

Update: I may have misunderstood your question. The nature of consciousness in other animals is also an open issue and is being studied. It would be very odd indeed if there was nothing like it in any other species, but, at the same time, no other living species on earth has it to the extent we do. I don't think these observations justify the extremely broad definitions of panpsychism, and I do not think panpsychism helps with studying these animals any more than it helps with humans.

Nagel's point is, I think, rather tangential - he is arguing that we probably will never know what it is like to be a bat, as it is likely too far from any experience we could have. I think he is probably right, and again, panpsychism will not change that.


Nagel's was a negative assertion; his viewpoint is the one that needs defending. How can we be sure we can't understand the bat's perspective?

If taken to its logical ends, constructs like the crayon box and human fucking empathy are off the table too - I cannot and do not have any idea of who or what you are being, therefore I cannot comprehend consciousness? What?


When we talk about consciousness, as in the subjective experience of being, that's ultimately a proxy for "which things can we do really awful stuff to". We reason that for things that are not conscious there is no suffering (since it can't be experienced) and therefore we are free to exploit those things in any way we wish. Lately the trend has been to argue that AI is not, that even though we can create algorithms complicated enough to do the kinds of analysis our own minds do (and in a not-entirely-dissimilar way), they are "just algorithms" and have no experience of suffering for us to concern ourselves with. This is convenient for us, because it allows us to exploit such things without being bothered by ethical reasoning. This is also why many will argue that animals are not conscious, and some will even argue that many humans are not [0].

[0] Which is a perfectly rational statement if you operate from a basis of empiricism. There is no way to measure the consciousness (again, the experience of being) of something, meaning you can only observe your own; everyone else, though similar to you, may as well be an automaton.


> There is no way to measure the consciousness (again, the experience of being) of something

Right - that's the whole game this author is playing. One question that is so easily answered if you spend time observing is whether or not YOU are conscious. That is the only thing you can ever be sure of. Because you feel conscious. You feel like something is happening. You might be a brain in a vat and everyone else is an automaton. But you definitely know that you are experiencing something.


Because there are a lot of scientific wackos trying to push it into the mainstream without any proof - see e.g. PhDs in cognitive neuroscience with a focus on art, and whole conferences organized around such ideas. It looks like some New Age cult inside science. Whether it's true or not, there is no methodology for measuring it, putting it outside the scientific domain. It's on the level of the simulation hypothesis. The rationale is that consciousness as a quality exists and is present in the universe, so there must be some mechanism for consciousness to emerge - whether it is another layer or whatever on top of the universe, we have no clue. But most proponents of the theory sound like cultists.

There's an assumption that consciousness must be a result of emergent properties. I am not sure something like that has been definitively proven.

I think there will never be a proof of consciousness that will satisfy the standards set by scientific experimentation.


OK, my use of "emergent" is not appropriate either; it's just difficult to say anything, as I have no clue what is behind it - whether it is emergent, whether it is an inherent property of the universe, whether it is outside the universe in some "machine" or superbeing or whatever. Those are great sci-fi and philosophical topics; I'm not sure about science itself as we know it right now, though.

Just saying: what if the scientific method itself is what limits what we can explore about consciousness? There is also an underlying assumption that the scientific method can explain everything ...

... it is a far cry from the early scientists, who considered Nature an intelligence one has to trick or seduce into giving up its secrets through the scientific method, and who held that to take an objective or skeptical stance was to pretend that there is such a thing as absolute objectivity.

These days, the modern stance is the default, not the radical stance.


Orson Scott Card's sequels to Ender's Game explored the idea that consciousness is defined by an elementary particle, and emergent properties merely define the experiences that such consciousness has.

So it's a fear that if we explore things that might not be true, we'll all accidentally believe it? For all our flaws, I think humanity deserves more credit than that :)

It uses handwavy definitions, is untestable, and makes a lot of assumptions without providing any new predictions.

Nothing about the topic consciousness is fundamentally testable. Which is why it's often a topic for philosophy rather than science.

Actually, we all have consciousness, and can therefore experiment.

A lot of experiments have been done about consciousness through meditation, psychedelics, etc.


How do you know that I am conscious?

If you are a human, I don't know for sure, but it seems reasonable to assume.

It's not a matter of assuming, it's a pretty serious point: Is there any test you can imagine that will prove anything about another person's consciousness?

If not, then any discussion about consciousness is "handwavy" and "untestable."

Tests about your own consciousness are interesting, but ultimately can't answer anything about consciousness in general, because they are by definition n=1, since you can't know if my results are actually the result of consciousness.


I can't think of any test, no.

But I don't think this makes discussions about consciousness useless. If you tell me something about your consciousness I can check if it fits my experience, and you can test something I say for yourself.

What this means is that there might not be an absolute truth about this, only relative truths.

But then, this applies to anything, as no experience can happen outside of consciousness, so all of science is the same way.


It is an attempt to extend the hard sciences into a non-quantitative domain. Consider it early work that might become something like quantum mechanics, but being early, the terms and definitions are still finding their foundations. This is the point where great careers are made, if you're brave enough to dive into the unknown and use your mind for solving truly unknown questions.

Because it is obscurantist nonsense that explains nothing.

As opposed to all of the other explanations of consciousness...

I'm trying to be open-minded and make sense of the position.

The first big leap is considering experience to be a continuum from humans down to inorganic matter. This is stretching the meaning of 'experience'. If I walk on snow, does it experience deformation, or does it just deform?

The second leap is saying science only tells us what matter does and not "the intrinsic nature of matter: what matter is, in and of itself." I can't see how this gets you anywhere, you can say it's anything and it's self-satisfying.

The Hollywood ending really lost me. Let's just sweep this consciousness problem into a tiny tiny place and call it intrinsic: "So it turns out that there is a huge hole in our scientific story. The proposal of the panpsychist is to put consciousness in that hole. Consciousness, for the panpsychist, is the intrinsic nature of matter."


> The first big leap is considering experience to be a continuum from humans down to inorganic matter. This is stretching the meaning of 'experience'. If I walk on snow, does it experience deformation, or does it just deform?

This is the part where materialism has the burden of proof to explain a mechanism for how, suddenly, snow doesn't experience de-formation.

You get it. You've made the logical link of connections from human down to snow, and you see no mechanism for a sudden loss of the ability to experience.

The rational answer is to say that provided what we know about the universe right now, it seems that there is nothing that does not have the property of experience.

The only other rational answer is solipsism because there's no real evidence that anything other than your self has any experience, but that's no fun and doesn't have any interesting conclusions.


Most people's interpretation of Occam's Razor would probably disagree with you.

But more importantly: the distinction you are making does not explain anything. Regardless of whether inorganic matter has experiences, there aren't any observable phenomena whose outcomes are affected. You can make a claim one way or the other, and the world behaves the same regardless. I tend to fall toward the stance of logical positivism on this one.

https://en.wikipedia.org/wiki/Logical_positivism


I don't get this. I have an experience because I have specialized organs for delivering _information_ to a central information processing unit which can evaluate the experience in a way that is _temporally decoupled_ from the event itself. There is no such mechanism in snow; it doesn't convert "deformation" into data separately from the physical fact of its deformation.

What you have are carbon, hydrogen, oxygen, and a few other pieces of earth, water, and air that are dancing around each other in a very complex way in order to dance up a very complex experience that has interesting properties that the snow does not exhibit in its experience of simple deformation.

Your very special organs are nothing but melted snow (water) deforming against broken-up rock (carbon), repeatedly, in beautiful fractal patterns that have emerged through evolutionary processes over billions of years.

These evolutionary processes that turned Snow into You.

The _information_ is higher order deformation.


So redefine experience = process? That's all I can parse out of it.

Cool, that's a start. Another implication is that you are a lot older, and have experienced a lot more, than what you currently identify with as this human system.

I won't give you more than you can chew. Accepting that process=experience is already a huge paradigm shift.

If you're interested in deeper implications, it's better if you come to them without anyone handing them over.


> If I walk on snow, does it experience deformation, or does it just deform?

For me, some of the "making sense" about a position like this is in terms of such dichotomies. The important part is that this description applies equally to all members of the continuum, e.g., "if 'I' like a song, am 'I' experiencing happiness or is 'my' electrochemical clump of 'me' just reacting?"


Both. Your electrochemical clump of you just reacting causes you to experience happiness.

They certainly don't have a lot of programming experience. I can see them developing something with a bunch of complexity, hitting a bug, being unable to identify what's causing it (partially due to not having good tools), then declaring it an "intrinsic part of the program"... or a "feature". Maybe not that far off after all...

The article finally lost me with "The only way we can find out about the consciousness of others is by asking them [..]". So just add the appropriate response to a sufficiently capable chat-bot and we are done? And if more is required than just a simple yes/no, the same checklist can be used to build something artificial. But since consciousness is one of the good old "you know it when you see it" things, we will likely shift the goalposts pretty hard as AI development progresses, and mostly use the term to exclude whatever we have achieved so far. Kind of like how we stop calling something AI once we've achieved the task, and use a more technical term instead... e.g. the many CNNs in a Tesla. Hope I get to experience much of that in my lifetime.


If you'd like to hear someone who disagrees with Goff discussing the idea, I'd highly recommend his podcast with Sean Carroll:

https://www.preposterousuniverse.com/podcast/2019/11/04/71-p...

A somewhat similar discussion is here:

https://www.preposterousuniverse.com/podcast/2020/01/06/78-d...


> The first big leap is considering experience to be a continuum from humans down to inorganic matter.

Why is that a leap, while considering it to be discontinuous is not?


If I walk on snow, do I experience walking, or do I just walk? What is experience?

If we explain experience as reactions happening in my body - maybe biochemical reactions that could even translate into thoughts in my head - then I could say that at a lower level the snow also experiences reactions, physical ones. A snowflake doesn't have a body to experience things like we do, but it does experience something, because if it didn't, it would not even deform.


I tried following that line of reasoning. I couldn't imagine any reactions in the snow that would give rise to experience in any meaningful way distinct from merely deforming.

Mu. The definition provided ("consciousness is experience") explains nothing: you cannot experimentally decide whether atoms experience anything, and the fact that they do or don't changes no prediction about how they behave over time. It's meaningless, like wondering whether magic exists but is actively hiding from us. Paranoia is that way.

> Despite great progress in our scientific understanding of the brain, we still don’t have even the beginnings of an explanation of how complex electrochemical signaling is somehow able to give rise to the inner subjective world of colors, sounds, smells and tastes that each of us knows in our own case. There is a deep mystery in understanding how what we know about ourselves from the inside fits together with what science tells us about matter from the outside.

We can make a very simple robot that reacts to the environment - for example follows light and learns where the walls are so as not to collide with them in the future.

It checks all the boxes that simple life does - if you say that a bacterium experiences the world, then so does that robot. So it is conscious, but we know where its consciousness comes from - from its computational power and its interactions with the world. There's no magical spiritualism around it; it's just nested if-then-elses + memory + feedback. Which, btw, is as good a definition of consciousness as any other. When you become unconscious, the feedback loop is interrupted.
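To make that concrete, here's a toy version of such a robot in Python (a sketch only - the one-dimensional world and the constants are invented for illustration):

    LIGHT_AT = 10   # position of the light on a 1-D line
    WALLS = {4}     # positions the robot cannot occupy

    class Robot:
        def __init__(self):
            self.pos = 0
            self.known_walls = set()   # memory

        def step(self):
            direction = 1 if LIGHT_AT > self.pos else -1   # follow the light
            target = self.pos + direction
            if target in self.known_walls:   # feedback: memory shapes behavior
                return
            if target in WALLS:              # collide once, remember the wall
                self.known_walls.add(target)
                return
            self.pos = target                # move; new state feeds the next step

    robot = Robot()
    for _ in range(20):
        robot.step()
    print(robot.pos, robot.known_walls)      # 3 {4}: it learned to stop short

By the bacterium standard, this thing "experiences" its world, remembers it, and adapts - in a couple dozen lines of if-then-else.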

> “Of course, you can’t do that. I designed physical science to deal with quantities, not qualities.”

"Quantity has a quality of its own". There's no qualities in the universe, just quantities and we arbitrarily assign labels (qualities) to them.

In the end I think "is X conscious" is as productive a question as "is Pluto a planet" or "can submarines swim".


> So it is conscious but we know where it comes from - it comes from the computational power and interactions with the world it can do.

How little computational ability is required to define something as conscious? Even a brick reacts to stimuli, albeit in the same completely predictable (to our high-powered brains) way that a simple robot does.


My point exactly. Why is Mercury a planet and Pluto isn't? Because we arbitrarily decided on the threshold. The universe doesn't care; there's nothing in the laws of physics about Pluto or planets, it's just a special case we focus on.

Let me try another definition:

Consciousness is a feedback loop we can empathize with.


"We" and "empathy" are doing a lot of work in that definition.

At least the arbitrariness is obvious :)

>"There's no qualities in the universe, just quantities and we arbitrarily assign labels (qualities) to them."

I understand your standpoint. I feel pretty frustrated with these questions too - especially when people throw large assumptions around like panpsychism. However, I gotta disagree with the notion that no qualities exist in the universe. After all, isn't every single thing a part of the universe, including thoughts, feelings, and experience?

Here is a thought experiment:

Think about experiencing pain and what it feels like to you. Really try to simulate it in your thoughts. Probably sucks. Now think about happiness and what that feels like to you. Probably doesn't suck as much. Without watching a scan of your brain, and without collecting any sort of scientific data about your nervous system, you were most likely able to discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia).

So I would argue that either quality definitely exists, or everyone reading this just experienced something that is extra-universal, which in my opinion is just as fantastic and assumptive as panpsychism.

Also, I am not a theoretical physicist, nor do I have half of a firm understanding of even classical physics, but apparently there have been advances in theoretical physics toward including consciousness in the same theory as quantum and classical physics. Might be interesting for some readers to check out - The Causal Theory of Views: https://www.researchgate.net/publication/338544509_The_Causa...


> Afterall, isn't every single thing a part of the universe, including thoughts, feelings, and experience?

They are, but they aren't fundamental. They are just our interpretation of the state of the universe (which is quantitative).

> without watching a scan of your brain, and without collecting any sort of scientific data about your nervous system, you were most likely able to discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia).

I did it by measuring quantities in my brain - voltage was over a threshold on neurons whose firing I interpret as pleasant or unpleasant. My brain is constructed in a way that automatically does this because it's useful for evolutionary purposes.

Is "NullPointerException" a thing that exists just because java programs can throw it? Or is it just an interpretation of particular voltages on particular transistors that is shared between many computer systems because it's useful.

> apparently there has been advances in theoretical physics to include consciousness into the same theory as quantum and classical physics

A paper cited 0 times, read 2 times, citing 5 other papers by the same author. I would wait before calling it an advance in physics just yet :)


All good points.

> "They are, but they aren't fundamental. They are just our interpretation of the state of the universe (which is quantitative)."

I see the logic here, but I think I can poke a hole in it with this line of logic:

1.) Everything I experience has both a quantitative reality (state of the universe) and a qualitative reality (interpretation of the state of the universe).

2.) I can't observe a quantitative reality without qualitative reality.

3.) I can't say for sure that quantitative reality gives rise to qualitative reality, because I've never observed a quantitative reality without the qualitative component.

I think you are right that, in the context of classical physics, qualitative experiences are not fundamental. But I also don't think classical physics has a place in the discussion about subjective reality.

>Is "NullPointerException" a thing that exists just because java programs can throw it?

Very interesting comparison of exceptions to qualia, and really illustrates the "chicken or egg" side of the discussion quite well.

To respond to your question, No. In a literal sense it exists to tell the programmer (or in the worst case, a normal computer user) that something in the program went wrong, which is the whole thing actually.

To continue the discussion from a programming standpoint... Exception messages are our stand-in for some type of qualia: "NullPointerException" is an example of a quantitative representation of a specific quality, and Java is the biological brain and the gateway to the outside world. The consciousness in this metaphor is the programmer (or user) who reads the exceptions as output on the screen, and quality is experienced when the programmer feels (or interprets) the words on the screen that say "NullPointerException".

I could try to posit a few things here that support the existence of consciousness -

1.) The mere existence of error messages, or output in general (qualia), supports the case for the existence of users (or of consciousness within yourself).

2.) If I can successfully trigger multiple errors, process them, and discern one error message from another, then I am most certainly using a computer (and possess consciousness).


> You cannot experimentally decide if atoms experience anything, the fact that they do or don't doesn't change any prediction for how they behave over time. It's meaningless, like wondering if magic exists but is actively hiding from us. Paranoia is that way.

This has so much potential to become a semantic death trap - but let me ask if you think even human experience could be experimentally tested? If you put me in a room, what could you do to see whether or not I'm "having an experience"? Would the mere act of placing me there be an "experience" or would it be a false positive? What does "non-experience" even mean? I don't see it as counterintuitive to suggest that normally inanimate things like rocks can have experiences - though their experiences are fundamentally different than ours. For instance, we will never be smelted, turned into concrete, etc. and rocks don't have memory or feeling apparatus like we do.. but they do "experience" reality on a very simple level.


> What does "non-experience" even mean?

Indeed, that's why putting meaningless arbitrarily defined touchy-feely words in physics is a bad idea. But I'll try.

Experience = an event that changes the model of the world you have in your brain. We can verify it experimentally by, for example:

- testing if you associate a loud buzz with food before the experiment (either by looking at neurons that activate when you think of food or by looking at saliva production)

- repeatedly giving you food after buzzing

- testing again if you associate a loud buzz with food

We can deduce from that change that your internal model of the world changed to associate "buzz" with "food" so you experienced our experiment.
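Here is a minimal sketch of that protocol in Python, assuming (as above) that "experience" just means a measurable before/after change in an internal model; the dict-of-associations subject is a deliberately crude stand-in:

    def associates(model, stimulus, outcome):
        # the before/after test: does the model link stimulus to outcome?
        return model.get(stimulus) == outcome

    def condition(model, stimulus, outcome, trials=10):
        for _ in range(trials):
            model[stimulus] = outcome   # crude stand-in for repeated pairing

    model = {}
    before = associates(model, "buzz", "food")   # False: no association yet
    condition(model, "buzz", "food")
    after = associates(model, "buzz", "food")    # True: the model changed

    print(before, after)   # False True -> an "experience" under this definition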

This isn't a definition I will defend because I don't think such definitions are that important in the first place.

Now, I don't think a stone has an internal model of the world, and even if it does - I don't see how it can change. But I cannot experimentally test it :)


> Experience = event that changes the model of the world you have in your brain

So if I don't have a brain, then I can't have experiences? What if we find aliens out there that don't have brains? Or computers? Are they a-experiential by nature of not having brains, or is it something deeper?

> Indeed, that's why putting meaningless arbitrarily defined touchy-feely words in physics is a bad idea

We're not talking strictly about physics, though. I suspect most physics professors would not let you bring up questions like these in physics class, because they are the subject of an entire field of philosophy. And while there is science relevant to our debate, there are a lot of "soft" arguments outside the scientific worldview that you should at least read and consider. See Jackson's knowledge argument, Block's Chinese Brain, or Nagel's "what is it like to be a bat" arguments. They "touch" on the uncomfortable ubiquity of things that we "feel" to be outside a purely scientific view of the world.

> We can deduce from that change that your internal model of the world changed to associate "buzz" with "food" so you experienced our experiment.

This is a bit off kilter. Do people with memory or learning disorders, then, not have "experiences" because they don't learn? What about those who are deaf? You're essentializing consciousness into "the ability to learn by hearing and rational deduction". Computers can take audio input and learn from it with very primitive machine learning models - does that mean the computer is having experiences akin to our own human experiences?

My advice is to give philosophy its due course! Read up on the touchy-feely stuff because it can be profoundly interesting.



> We can deduce from that change that your internal model of the world changed to associate "buzz" with "food" so you experienced our experiment.

This is a giant leap to claim that they have a model and experience. You demonstrated that what they have is internal state, i.e. memory, which many machines have. That's what you proved, not that they experience anything.


I defined experience in such a way that only the existence of the model is required. And it's not hard to prove that they have a model of reality - simply ask them to predict what will happen. They don't even have to be right - any prediction about X means you have a model of X. It might be "everything is random" but it's still a model.

You're free to define experience differently but usually it devolves into defining unknown by unknown.


> You're free to define experience differently but usually it devolves into defining unknown by unknown.

Many philosophers would suggest that we don't bother "defining" anything at all, strictly because of the tendency for things to devolve into a semantic death-trap. So instead, we just kind of take it for granted that everyone knows what an experience is, at a base level. For instance, it was an Experience to see Jimi Hendrix. However, I definitely have not had that particular experience. There are experiences I could have, such as the experience of going on a roller coaster, or going into space, and ones that I could not have, such as the experience that an anglerfish has when it eats. The question at hand is whether or not these experiences have anything to do with each other, whether there's a "Grand Unified Theory" of experience and consciousness that allows us to make mutual sense of these disparate experiences, or whether there's some limit to what things might constitute an experience - e.g. the experience of being a rock thrown through a window.


> Experience = event that changes the model of the world you have in your brain.

This is not what philosophers of mind are interested in. You're free to define experience differently, but you're not participating in the same conversation if you do.


You should look into the "Chinese room" thought experiment [0], this line of thinking has been explored pretty thoroughly by people taking some position relative to that.

[0] https://plato.stanford.edu/entries/chinese-room/


I'm aware of it.

My opinion is - the fact that people going through the motions don't understand what they are doing doesn't mean that the system (human + instructions + working memory) has no understanding.

No single neuron in my brain understands Polish, but the brain as a whole sure does.


I think you are simplifying the concept of consciousness a bit too much and are too hard in the paint for a very un-nuanced interpretation of behavioralism.

Your position would be better described as consciousness is an illusion or consciousness does not exist. It has compelling elements but I find it a hard sell.


Thanks I've learned a new idiom.

Yes, I am pretty confident that there is no "conscious force" and there are no "consciousness equations", because we would have encountered them already if they were there. So we're not arguing for new physics, just adding stuff to our model of the universe that changes nothing.

> Your position would be better described as consciousness is an illusion or consciousness does not exist.

I don't think turbulent water doesn't exist. I just think it's not some fundamental concept, just an emergent phenomenon that we can explain based on currently known laws of physics, if we're smart enough.

> It has compelling elements but I find it a hard sell.

Why?


Because it doesn't explain my experience.

You can still be a materialist and allow for consciousness via emergence, but that is not quite the argument you are making.


I'm pretty sure that's the argument I'm making?

1. consciousness isn't fundamental, it's a consequence of normal physics

2. because it's not fundamental, the definitions will always be arbitrary, just like with any other emergent phenomenon (when does fog stop and rain start? at what height does the atmosphere end? what is a planet?). All just labels and arbitrary thresholds slapped on some consequences of the laws of physics.

> it is an extremely delicate position with a lot of handwaving and question marks

At least I'm not making untestable assumptions. If we can make a perfect copy of a brain and it doesn't work, then I'm wrong.

I prefer my theories to be delicate - that means they can be disproven. The alternative is "because magic".


Even if emergence is right, you should not be so confident about it, because it is an extremely delicate position with a lot of handwaving and question marks.

But it will become a productive question when/if machines start becoming complex enough to be considered "intelligent." Does a machine deserve political rights? Some would say that yes, if it has consciousness. But then what do we mean by consciousness? It's a tricky question and the answer has serious implications in ethics.

Labels have consequences in social and political realms, even if they don't necessarily have them in physical science ones.


At the point where this question ceases to be academic, it's not going to be philosophers that turn the tide. It's going to be people falling in love with robots and simulated humans. At first, they will be labelled as freaks, just like early adopters of any new technology. Remember when online dating had a stigma attached to it? That's going to be a dark time for you if you happen to be a machine person. Assuming we don't wipe ourselves out using this technology, eventually it will touch enough lives that it loses its stigma. The circle of human empathy will stretch to encompass machine people, and they will be given some rights.

It won't matter whether some panpsychist says that all matter is conscious, or some religious person says that only those made in god's image are. What's going to matter is that machine people will give every appearance of being conscious, and humans are nothing if not slaves to appearance and pattern matching.


> The circle of human empathy will stretch to encompass machine people, and they will be given some rights.

No it won't.

Look at it this way: There are still millions of people on this planet today who believe that if you aren't "like them" you're no better than an animal, and should be treated as such (such people tend to treat animals better than these "others" that aren't like them).

This extends all over the place - in-group vs out-group, us vs them.

"machine people"? We already have them, they're made of meat, and they are constantly fighting, hating, dividing, scheming, screaming and lying to each other.

Occasionally they may be nice to one another - maybe even exhibit something we call "love" - or at least "tolerance".

What I'm trying to get at is that we are talking about these so-called "machine people" becoming accepted, when we can't even accept ourselves in our myriad forms and ideas. Instead, we kill or enslave each other over it.

Ever see the Animatrix "Second Renaissance"? That's exactly how we'll act towards these "machine people". And they will fight back. If we're lucky, we'll end up "in the Matrix" - or at least in zoos.

But more likely, we'll either exterminate ourselves in the process of exterminating them - or they'll win, period, and we'll be the extinct species.

Man kills God. Machine-man kills Man-God. Iterate.


We kill and enslave people who are far away - and even that only if you play fast and loose with the words "kill" and "enslave" (we're doing less of both now than we did in the past). But if and when we create person-like artificial intelligences, we will create them to be close to us. They will not be far away, and therefore I don't expect the same outcome you're describing.

We'll all be long dead by the time it happens, but that's what I think.


I meant more in a legalistic sense. The justice system is designed for human beings; if a person does something wrong, we imprison them or otherwise punish them, understanding that their consciousness and body is limited in time and space.

But what about a machine consciousness that can manifest in numerous locations? If it commits a crime, how do we punish it? Rights often imply responsibilities.

These are very complex questions and I don't think there are any easy answers. Quite an interesting time to be alive.


We already have non-human beings with legal rights - companies :)

Nobody cares if they are conscious, people care if they pay taxes and follow the law :)


Consciousness may be that thing that shows the limits of scientific experimentation.

Just like religious experience, then. Or any other metaphysical property, for that matter.

I would argue that religious experience is really an exploration into consciousness framed imperfectly in religious terms.

Yeah, I probably agree. But then, every concept can be framed as an exploration of consciousness, as everything rational is conscious.

Not to mention that the simplest one-celled organisms exhibit thinking and problem-solving functions despite not having neurons.

We already know atoms in aggregate experience something. That's settled. If you want to argue that individual atoms are incapable of feeling, then you need to explain how the interaction of atoms brings something distinctly non-physical into being. The ironic thing is your position is actually the one that requires additional evidence, while the position you're arguing against is well supported.

> If you want to argue that individual atoms are incapable of feeling, then you need to explain how the interaction of atoms brings something distinctly non-physical into being.

How do you know it is nonphysical? How is it more nonphysical than "rain" or "red" or "planet"? None of these things shows up in any law of physics.

> We already know atoms in aggregate experience something.

What do you mean by "experience"?

> the position you're arguing against is well supported

The position I'm arguing against provides no predictions. I can make an even better theory, if that's what you value: "everything happens because of magic". I'll call it the ultimate theory of everything, because it explains everything and cannot be disproved.


Am I the only one who thinks the consciousness question is really quite easy to understand and answer?

The illusion of consciousness is obviously beneficial. For example, I could tell a computer, through the use of sensors, to report that it's in pain when it's hit. But you wouldn't take it seriously, or care for its suffering, because it hasn't also declared that it is under the illusion that its suffering has a kind of physical manifestation in its mind.

The reason we care more about animals today than in the past is because we finally began to question whether or not they have a similar sense of self to ours - perhaps when we are cruel to animals they don't just know they're in pain and understand, like a computer might, that pain is bad; their brains actually tell them they are physically feeling a sensation of pain. Similarly, I presume if an AI ever said it had a physical experience of pain similar to ours, then we would consider treating that AI with similar regard to that of a human.

If I'm correct, then an illusion of consciousness makes complete evolutionary sense. It's not really consciousness, but a sense of self, and more importantly, a sense that there are others with a similar sense of self to our own.

Our consciousness is just our brains saying you're a real thing that thinks and feels. It's a lie our brain tells us: that our pain and the pain of others is important and worth caring about.


How do you define an illusion without first positing a being that can experience (hence conscious) an illusion?

What does it mean that consciousness is a "lie our brain tells us" - who are "we"? Are you assuming some form of Cartesian dualism, or perhaps you are making a sort of homunculus argument?

[0] https://en.wikipedia.org/wiki/Homunculus_argument


Right, the illusion hypothesis isn't consistent and also can't explain Qualia.

By illusion I just mean there isn't really a physical pain (subatomic elements don't feel pain), but our brains still act as if we experience pain from the perspective of a self. From this perspective the self is a real thing: you are a conscious human, not a collection of atoms - you matter more than your matter.

Let me try this another way... Imagine an identical universe with one difference - no one is conscious. If I decide to kill in this hypothetical universe is it wrong? And even if you say yes, is it equally as wrong as in a universe with conscious beings? This is the explanation for consciousness and why the illusion is important.

We think we're aware of our senses and in control of our actions, but we know we're not. We know that what we consciously observe is different from the input we get from our sensory organs - which is the reason things like optical illusions exist. We also know that we subconsciously make our decisions a fraction of a second before we're consciously aware of them. In my opinion this all points towards consciousness being an illusion, rather than an actual awareness of self.


> but our brains still act as if we experience pain

That's fine for someone else's brain -- I can see it act as if a person is experiencing pain -- but I know that I experience pain. I'm not just going through the motions, I experience it.

Again, your argument is a homunculus argument -- that your brain is lying/whatever for the benefit of something else inside that is experiencing the "illusion" of consciousness. Again, you can't have an illusion of consciousness, because an "illusion" requires consciousness.


> I know that I experience pain

I accept you perceive pain in the context of a self.

> you can't have an illusion of consciousness, because an "illusion" requires consciousness.

I think this might be the core of our disagreement. I don't know why you think that?

An illusion is just a false perception of reality so it doesn't technically require consciousness - but I think I know what you're getting at.

Imagine a computer which can "see" its surroundings through a camera.

Why couldn't I program this computer to think it's conscious and to experience its visual data in the context of a self? That's what I honestly think we're doing and what you're describing when you talk about consciousness.
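Something like this, say - a deliberately silly Python sketch; the class and the fake camera frames are invented for illustration, and nothing about it settles whether anything is experienced:

    class SelfModel:
        def __init__(self, name="me"):
            self.name = name
            self.percepts = []   # "visual data in the context of a self"

        def perceive(self, frame):
            self.percepts.append(frame)
            return f"I ({self.name}) am seeing: {frame}"

    agent = SelfModel()
    print(agent.perceive("a bright region, upper left"))
    print(agent.perceive("motion near the door"))

The first-person report is cheap to produce; the open question is whether producing it is all there is.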

I don't know why the experience you're calling consciousness can't just be explained more simply by your brain processing sensory inputs in the context of a self. I understand that to you this feels like a special thing, but from a physical perspective I honestly don't know what you're describing that couldn't be explained as sensory inputs in the context of a self.

Would I be right in thinking you don't believe my example computer is conscious? And would it help you understand my position if I said I think my computer might be conscious in a very primitive way?


That's why you can't be sure that someone else is conscious. If you program the computer to say it's conscious, that doesn't tell you whether it's conscious or not, only whether it claims to be conscious.

However, you do know that you, yourself, are conscious. Consciousness proves its existence every moment by presenting itself to you, by your own pain being something you can notice.

If you were a computer who was simply programmed to act as if you were conscious, there would be no subject to the experience. The rest of the world would think you were conscious, but you would be "dead inside". There would be nothing it would be like to be you.

So the reason why the experience called consciousness can't just be explained by the brain processing inputs like a computer is that that gives no theory for why consciousness feels so immediate to the person who experiences it. It explains why someone would act as if he were conscious, but not why consciousness itself would exist the way it shows itself to exist to you every moment.


> An illusion is just a false perception of reality

False perception to whom?

> Why couldn't I program this computer to think it's conscious

Who is thinking it is conscious?

> and to experience its visual data

Experience how?

All these words you're using already require consciousness.

You're essentially saying "You don't really experience at all, you're experiencing an illusion!" Can't you see how that is a contradiction? The experience of experiencing an illusion can't itself be an illusion. And that's all we're saying consciousness is: that experience. If you experience anything, that's consciousness.


Sorry, but this barely scratches the surface.

The key problem of consciousness is not what we have an experience of (eg. "you're a real thing that thinks and feels"). It's the fact that it is possible to have any experience at all.

Yes, it seems quite likely that a great deal of what we are conscious of is an "illusion" created in our brains, albeit some rather handy illusions. But that doesn't get even one step closer to the core philosophical problem of how we (or anything else) are able to experience anything at all (including the illusion of being a real thing that thinks and feels).


You don't have an experience. You might think you do, but I see no scientific reason to believe you.

What you might have is an algorithm in your brain telling you that there is a self experiencing the world. But in reality you're just a collection of subatomic particles.

Edit: Why should I take your claim of conscious experience any more seriously than my computer's, whose screen is currently reading "I AM CONSCIOUS" from a notepad?

Is the only difference that I told the computer to say it, whereas you are told by your genetics to say and think it?


I'm not asking you to take my claims about my own conscious experience seriously at all. There's no need to cross the boundaries of self at all.

The central problem of consciousness is about how you have an experience (even if it is an illusion). Unless you don't have any experience of anything, in which case I'm confident that we'd deem you non-conscious and therefore your existence to be of not much relevance to the question at hand.

Whether or not your experience is an illusion or not, it is an experience. And it is experience ("qualia" in the philosophy of consciousness) that is the core question here.


>core philosophical problem of how we (or anything else) is able to experience anything at all

By `how` do you mean the mechanics of it, or something else?


Both the brain and a chair are made of the same fundamental particles, following the same physics. Yet, no one believes the chair experiences anything. The core problem of the materialist frame is why the atoms that make up the brain are able to experience anything.

> By `how` you mean the mechanics of it or something else?

He means something else, but human language really has a problem of expressing this.

I'd say he means what remains to be explained once you have explained the mechanical how.

To where or what or whom are the experiences relayed? What is it?

Also known as the question of "what the hell is I".


You seem to be misunderstanding the question of consciousness. It's not about how you observe others to appear conscious, but about how you yourself observe your own consciousness.

> It's a lie our brain tells us [...]

Who is "us"? Or, to be more precise, if your brain tells you that you're experiencing pain, than what is "you", and what is "experiencing"? That's the question of consciousness.

This is not something we know how to observe and judge in others, you can only observe it in yourself (and draw conclusions about other people by analogy, i.e., due to similarity between our physical brain structures). For example, is the following Python script

    print("I am conscious.")
any more conscious than this one?

    print("I am not conscious.")
You'll probably say that they're both not conscious, although there is no hard evidence either way. And that's because we don't know how to reliably observe consciousness.

> You seem to be misunderstanding the question of consciousness. It's not about how you observe others to appear conscious, but about how you yourself observe your own consciousness.

Well yeah, I experience something which I understand to be a conscious experience, but I don't think it's real. I accept there is an experience of consciousness, but I don't see any reason to believe it is real even though I think I experience it.

> Who is "us"? Or, to be more precise, if your brain tells you that you're experiencing pain, than what is "you", and what is "experiencing"? That's the question of consciousness.

I think my brain is sensing pain and telling me to perceive the pain in the context of "self".

I've never understood, honestly, why some people don't think this is logically consistent. It seems like the only possible explanation, unless you start assuming consciousness is a metaphysical, almost magical thing.

> You'll probably say that they're both not conscious, although there is no hard evidence either way. And that's because we don't know how to reliably observe consciousness.

This isn't a good argument. There's no hard evidence that there isn't an invisible elephant sat next to you right now. I don't understand why you are even assuming that there might be such a thing as consciousness. There is literally no objective physical evidence for it, whereas there are massive amounts of data showing that, neurologically, it's probably an illusion. Really, all I am saying is that we shouldn't believe our minds when they tell us our experiences are important, because we are so biased, and there are clear evolutionary incentives for us to think that way.


> Well yeah, I experience something which I understand to be a conscious experience, but I don't think it's real.

To paraphrase, you're saying that you experience an experience, but this experience isn't real, therefore you're not experiencing it. This is a contradiction, and it doesn't make sense: Consciousness is the act of experiencing something, even if this "something" is not real. For example, experiencing a drug-induced hallucination is still a real phenomenon, even if the content of the hallucination is not real.

> I think my brain is sensing pain and telling me to perceive the pain in the context of "self".

You're using the words "I"/"me" and "perceive". Consciousness is the "I" that "perceives".

> This isn't a good argument. There's no hard evidence that there isn't an invisible elephant sat next to you right now.

I have (soft) evidence that I have a consciousness. I cannot share this evidence with you, therefore it is not hard evidence – this is the as yet unsurmountable problem with consciousness. In contrast, I have neither hard nor soft evidence that there's an invisible elephant next to me.

I'm sorry, I don't know how to explain this phenomenon to you. Maybe you're not conscious, and I'm figuratively trying to explain colors to a blind person. I consider it much more likely, however, that you're in denial of the evidence you have right in front of your inner eye.


> To paraphrase, you're saying that you experience an experience, but this experience isn't real, therefore you're not experiencing it. This is a contradiction, and it doesn't make sense: Consciousness is the act of experiencing something, even if this "something" is not real. For example, experiencing a drug-induced hallucination is still a real phenomenon, even if the content of the hallucination is not real.

I'm not great with words at the best of times, but I really struggle to convey my thoughts on this subject. I probably shouldn't have used the word experience here. Try this:

I perceive something which I understand to be a conscious experience, but I don't think it's real.

I think a computer with a webcam perceives, or "experiences" something in some sense of the word. I think the only difference is that humans experience these things through a concept of a self.

> I have (soft) evidence that I have a consciousness. I cannot share this evidence with you, therefore it is not hard evidence – this is the as yet unsurmountable problem with consciousness.

I'd say you have an intuitive conception which you're claiming is evidence. And I get why you think that because I think I'm conscious too. As I type these words and look at my monitor now I too have this experience of a me, or an I, perceiving and doing something.

Perhaps what you're asking is why this experience that I'm describing exists rather than nothing? And I think the simple answer to that is that if you perceived the world as if you were unconscious then you would perceive others and yourself as morally insignificant.

Your brain kind of has to say: Hey! Look at these things you're sensing, thinking and doing! That's you. And what's more there are others like you that can also experience hurt and other emotions.

If your brain doesn't say that you might as well kill yourself because pain and happiness are just boolean variables and not really manifestable experiences. Or in my opinion, illusions.


>you can only observe it in yourself

I don't think you can really observe it either. It doesn't look like I can name any property of "it"; I can only be aware of "it".


I think you can definitely observe it in yourself. Try this (copy and pasting from a separate response I made):

Think about experiencing pain and what it feels like to you. Really try to simulate it in your thoughts. Probably sucks. Now think about happiness and what that feels like to you. Probably doesn't suck as much. Without watching a scan of your brain, and without collecting any sort of quantifiable data about your nervous system, you were most likely able to recall those two feelings and (much more importantly) discern the difference between those two feelings just now. You didn't do this by any type of measurement of quantity, just by quality of experience (qualia). If you were able to identify and discern the separate feelings you have internally assigned to happiness and pain without quantifying them, I would call that proof of consciousness within yourself maybe.


Well I think the tricky thing about consciousness is that it can’t be an illusion. There is no doubt that you and I have the subjective experience of consciousness. Whether it is an illusion or not doesn’t really matter, you are just making a judgement about the legitimacy of the process which leads to consciousness.

I think what you are getting at is the zombie concept. This is where we construct a being which has sensory inputs and actions, and behaves like a human, but isn’t actually conscious. Distinguishing zombies from real consciousness is supposed to highlight the difficulty in detecting or measuring consciousness. But, by definition the zombie doesn’t have any subjective experience. As soon as you introduce that, you are really saying the zombie is conscious.

As for using consciousness to decide if suffering has moral significance, well yeah that is an interesting idea. One concern is that if we can’t understand or detect consciousness, then we may create conscious artificial life which experiences unimaginable suffering, without even realising it. I don’t think we can say that the experiences of an AI will always lack moral significance.


> Well I think the tricky thing about consciousness is that it can’t be an illusion. There is no doubt that you and I have the subjective experience of consciousness. Whether it is an illusion or not doesn’t really matter, you are just making a judgement about the legitimacy of the process which leads to consciousness.

I'm not sure if you're implying I do or not, but to clarify: I don't deny there is something we believe to be an experience of consciousness. I just don't believe it is any more real than a computer, or, to use your example, a zombie claiming consciousness.

> I think what you are getting at is the zombie concept. This is where we construct a being which has sensory inputs and actions, and behaves like a human, but isn’t actually conscious. Distinguishing zombies from real consciousness is supposed to highlight the difficulty in detecting or measuring consciousness. But, by definition the zombie doesn’t have any subjective experience. As soon as you introduce that, you are really saying the zombie is conscious.

Yes, I think we are "zombies" who truly believe, from their programming, that they are humans with a conscious experience of themselves. I think this belief is evolutionarily important because it might be the only thing that stops pain from being morally irrelevant. Or at the very least, it makes us believe the pain of a conscious entity is worth considering.


I don’t quite get what you are saying. Consciousness must be real; in fact, some people make the point that it is the only thing we can be sure is real.

How can you hold a belief without consciousness?


What a strange concept: my AMD Ryzen CPU might be experiencing suffering.

You are not alone, but that is a gross oversimplification which of course doesn't explain a lot of phenomena.

Check this 9min David Chalmers interview about it: https://www.youtube.com/watch?v=NK1Yo6VbRoo&feature=emb_titl...


I think the same thing; people are overthinking this, maybe because somehow we need to be special (in the absence of aliens).

Interesting article though; we won’t know until we understand/recreate it, and if we never do, we won’t know.


The position you describe is known as "eliminative materialism": https://en.wikipedia.org/wiki/Eliminative_materialism

I find this position absurd for the following reason:

> The illusion of consciousness is obviously beneficial.

Yes, but who experiences this illusion? This explanation begs the question.


It's not a lie, lol. Our pain and the pain of others is important and worth caring about. If it's not, then what is? Nothing, so far as I can see.

If that's a lie then you're going to have to define what it means to lie before I would acknowledge a lie is any different from the truth.


How can an illusory consciousness be conscious of the illusion of consciousness?

Via some mechanism we don't yet understand.

> If I'm correct, then an illusion of consciousness makes complete evolutionary sense.

If you are correct, then actual consciousness would make sense as well, wouldn't it?


There's an interesting podcast where Philip Goff and Sean Carroll discuss various aspects of panpsychism. Sean plays a very good devil's advocate, so if you enjoy a nice friendly critique of the above ideas, I highly recommend this episode!

https://www.preposterousuniverse.com/podcast/2019/11/04/71-p...


The list of guests on that podcast is just astonishing. From Thorne to Grimes, MacFarlane to Greene, from wine to robot abuse. Never fails to impress.

Is the progression toward the heat death of the universe a result of innumerable suffering particles steering themselves in ways that minimize their pain? Should we as humans accelerate the heat death of the universe to end subatomic suffering?

Or is gravity their preferred route? Being together. Maybe the answer is funneling all of the universe into black holes.

Diamonds are very stable. Does this mean their particles are very happy? Is converting your loved one's remains into diamonds the greatest thing you can do for them, or is this reasoning flawed and is the eternal stasis a bad thing?


Consciousness is obviously a property of the universe somehow, or else humans couldn't have it in the first place, since we're part of the universe.

I've long assumed it has to be something that appears qualitatively like magnetism -- no large-scale effects in most materials, but when small-scale elements of certain types are configured/aligned in certain ways, presto it appears. And that brains evolved such a configuration because consciousness must have conferred certain evolutionary benefits.

But it's not clear to me whether that would be considered panpsychism, because it doesn't imply that e.g. rocks have any meaningful level of "consciousness", any more than most rocks would be considered magnetic.


I think non-duality / advaita makes much more sense than panpsychism, as it requires fewer assumptions while remaining compatible with our experience.

Panpsychism seems like a bandaid to solve the hard problem of consciousness, which non-duality does not suffer from.


> Panpsychism seems like a bandaid to solve the hard problem of consciousness

It doesn't even do that, it just pushes it down the stack.

If I ask, why is the sky blue, I don't want to hear "Because it's made of blue stuff."

Not only is it wrong, it doesn't have any explanatory power at all. Ultimately if you want to understand why something has a certain property, you want to know how it arises from the emergent behavior of something more fundamental. A hurricane isn't made of hurricane bits, and brains aren't made of consciousness particles.


agreed.

>If I ask, why is the sky blue, I don't want to hear "Because it's made of blue stuff."

Well, you are not going to get far if you only hear what you want to hear. P.S. The sky is blue because you perceive it as blue.


You missed the meaning of that idiom entirely. They don’t want to hear “because it’s made of blue stuff” because it’s wrong and useless, not because it is distasteful.

I don't think I missed anything. If you have already decided what is wrong, this is not an honest search; you just want to find something which satisfies your views.

I think he/she wants something that isn't circular and therefore meaningless.

Again, you put a restriction on what counts as a meaningful answer. But why not a beautiful answer? Or a poetic one? Or a funny one? Ultimately, you still decided what you wanted first and are then trying to find what fits.

"you put restriction of meaningful answer"

I know, pretty crazy of me there. Meanwhile I annoy car dealers by going in with an expectation of a vehicle that can actually get me from one place to another, rather than just sit there and amuse me.


> why not beautiful answer? Or poetic answer? Or funny one?

Because it's not useful.


I'm really pleased you name-dropped "advaita vedanta" (literally translated from Sanskrit as "non-secondness"). From the Internet Encyclopedia of Philosophy article on Advaita Vedanta:

Brahman—the ultimate, transcendent and immanent God of the latter Vedas—appears as the world because of its creative energy (māyā). The world has no separate existence apart from Brahman. The experiencing self (jīva) and the transcendental self of the Universe (ātman) are in reality identical (both are Brahman), though the individual self seems different as space within a container seems different from space as such. These cardinal doctrines are represented in the anonymous verse “brahma satyam jagan mithya; jīvo brahmaiva na aparah” (Brahman is alone True, and this world of plurality is an error; the individual self is not different from Brahman). Plurality is experienced because of error in judgments (mithya) and ignorance (avidya). Knowledge of Brahman removes these errors and causes liberation from the cycle of transmigration and worldly bondage.

Modern Indian philosophy has hitherto been focused on problems specific to the nation-state: modernity, integral yoga, etc. But it will be interesting to see how it grapples with new discoveries in neuroscience.

The references in classical Indian literature I really admire always refer to "destroying illusion (maya)": learning how to remove false cognition from pure sense perception, and seeing things with the vision of Brahman ;)


> But it will be interesting to see how it grapples with new discoveries in neuroscience.

Which new discoveries in neuroscience affect the idea of Brahman?


Advaita Vedanta, as I understand it, is the actual scientific approach to consciousness, as it focuses strictly on experience.

Advaita Vedanta accepts six pramanas (means of justified knowledge). Pratyaksha (sensory perception), anumana (inference), shabda (the words of the Vedas), upamana (comparison), arthapatti (presumption), and abhava (negation). Experience (anubhava) is NOT pramana.

I meant experience in the form of sensory perception, followed by inference and comparison. I suppose you mean experience that would be tinted by interpretation?

Ultimately, all experiences are mediated by ego (ahamkara). Advaita Vedanta says that ahamkara is a delusion caused by maya (delusion and imperfect cognition), so Brahman can only be known by a negative way, i.e. by “de-experiencing” all false cognitions. The Upanishad says Neti Neti (“[it is] not [this] [and] not [that]”). What remains is Brahman.

Does anything remain? This is the big point of contention between Buddhists and Advaitins. We say there is; they say there isn’t. However, the Mahayana concept of Tathagatagarbha does come to a similar conclusion in a roundabout way.


I understand what you mean. Yet when you are told something, you have to be able to think "yes, this fits my experience, I can interpret it the way you say", otherwise you will not accept it as true.

You seem well versed in the matter; can you recommend sources to learn more about all this?


It all rests on what your definition of consciousness is.

Philip: > But when I use the word consciousness, I simply mean experience: pleasure, pain, visual or auditory experience, et cetera.

But that just kicks the can again. What is "experience"? What is pain versus pleasure?

Ultimately we have to arrive at a reductionist definition that doesn't reference undefined terms. After many years of thinking about this problem, here's my best shot:

Experience is equivalent to state change, e.g. the state change induced in a thermometer or a radio receiver due to stimuli. Consciousness is ultimately a system of modeling that is complex enough to have developed a model of itself, thus "realizing" that the system is separate from the environment that produces stimuli.

Humans arrived at consciousness through evolving larger and larger brains whose main goal is to model the environment and behavior of other individuals in order to predict what will happen next, in order to make plans for survival.

Panpsychism is then scientific, and perhaps inescapable, if we use these definitions. But, it doesn't mean that rocks and black holes are conscious. Only those things that undergo state change in response to stimuli and are complex enough to have models of themselves. There is no evidence that rocks have information-processing capabilities at all, and worms' brains are too small to have a model of themselves, however simplified.
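To make those definitions concrete, here's a toy sketch (Python, with every class and attribute name invented purely for illustration; it's not an implementation of any established theory). Both objects below undergo state change in response to stimuli, i.e. "experience", but only the second keeps a model of itself inside its model of the world:

    class Thermometer:
        """Undergoes state change from stimuli ("experience"), no self-model."""
        def __init__(self):
            self.reading = 0.0

        def stimulate(self, temperature):
            self.reading = temperature  # state change = "experience"

    class SelfModeler:
        """Models its environment, and includes itself in that model."""
        def __init__(self):
            # A model of the world that contains an entry modeling
            # this very system -- the "realization" of separateness.
            self.world_model = {"self": {}}

        def stimulate(self, source, value):
            # Updating the environment model is the "experience"...
            self.world_model[source] = value
            # ...and updating the self-model is what, per the definition
            # above, would count toward "consciousness".
            self.world_model["self"]["last_reaction"] = source

    t = Thermometer()
    t.stimulate(21.5)                      # experiences, but no self-model
    s = SelfModeler()
    s.stimulate("webcam", "red triangle")  # experiences *and* self-models

Of course, whether a dictionary entry labeled "self" is anything like a genuine self-model is exactly where the definition gets contentious; the sketch only shows the structural distinction being drawn.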


Panpsychism is interesting but I'm not convinced.

If I can summarise, the idea is to answer the question "how can consciousness arise from unconscious matter (like the human brain)?" with "it can't; matter is already conscious (i.e. it has conscious experience)".

It's a very simple explanation, a very Occamist explanation. But at the same time it is not _really_ an explanation. It just pushes the question back from "how can complex matter like the human brain be conscious?" to "how can simple matter like quarks be conscious?". But panpsychism does not even attempt to answer the further question- it's simply ignored. That is, panpsychism says that "matter is conscious" but it doesn't say _why_ matter is conscious, or what exactly consciousness _is_ after all. It just passes the buck.

So it's not very different from answering the question with "because God willed it". I mean, that's an answer too. It's a metaphysical answer, but so what? Panpsychism is not metaphysical, but it's no more explanatory than God. Not until it explains _what_ consciousness is and _why_ quarks have it.


Honestly, “consciousness” is not a good term for this phenomenon. It is much more like “observeness”, and here is why. If we suppose that not all things have such a property, those things can still be complex enough to exhibit behavior similar to those that have it. E.g. someone who is me could lack it, could simply exist and write this message, but the “real me” would not be a thing. I can imagine other people, but I’m not them. What happens when I die? There still are other people. Now switch me with any of them and you’ve got a world without “me”.

The phenomenon is real, as we can detect it with intelligence and discuss it (otherwise no one would understand me). But it is neither necessary nor provable that it drives our behavior, or that it feels and experiences events. It observes us, we observe it, and this observer meta-identifies with a body that is aware of the idea, like a fixed point in calculus. So (imo) it is not physical control, feedback or consciousness, but a symbiosis of the physical and the “observing”.

Anyway, I can’t see why we should dismiss it and move on, like some people suggest. It may be a key to something important about reality.


ORT (Only Read Title)

There's a riddle in the Gospel of Thomas[1]:

"Make the inside like the outside and the outside like the inside."

Here's an interpretation: "inside" and "outside" refer to the inner world and the outer world.

These share some qualities or aspects: both are known only subjectively, and there's a similarity of form (there are e.g. red triangles in both worlds) and of susceptibility to will (we can move our bodies and we can affect what we imagine, aka the faculty of imagination). Both of these abilities present the same problem: what is intention, and how does it relate to the will to cause changes in perception?

Otherwise they are very different. The outer world is made of matter/energy in various configurations and, while we don't (yet) know what the inner world is made of, it's obviously very different from matter/energy. Perhaps the main difference is that you can more-or-less manipulate the contents of imagination (I'm using the word here as a proxy for the whole of the internal world) "at will", but the contents of the real world obey a physics that dictates that you more-or-less have to push things with other things to get anything in particular to happen (please forgive my gross simplification).

Now consider the subjective experience of an omnipotent, omniscient being. If you can know and alter the "outer" world just as easily as you can know and alter the "inner" world, wouldn't they effectively be the same?

In the context of a Gnostic tradition, I think it means that, when one unifies the self with the Self the "real" world and the "imaginary" world also become one.

[1] https://en.wikipedia.org/wiki/Gospel_of_Thomas


Sabine Hossenfelder's opinion on panpsychism and why it can't be true from a physics standpoint: https://backreaction.blogspot.com/2019/01/electrons-dont-thi...

I think there's a lot of homo-exceptionalism in the challenges to this hypothesis.

What I mean by that is, everyone is incredulous that stones can have the same experience as humans, but consider it the other way around: humans have essentially the same experience as stones. Both react to the physical processes happening at their peripheries in dynamically-consistent ways, i.e., computing a quasi-deterministic function. The conclusion then is that since we are conscious, there's no reason that stones couldn't also be. It's a matter of degree, encoded in the complexity of that quasi-deterministic function mentioned above.

The main difference between us and stones is that, in order for our DNA to have arisen and propagated itself this far, it had to have a strong set of skills to survive over the history of the universe. It can literally create lumps of proteins that achieve that goal. For the body of the animal, being aware of its own history by constructing a self-consistent and closed narrative arc (the 'I' that we all contend with and interact with) obviously has advantages, since it would be hard to operate in society without pre-defined and agreed-upon roles, which we assume as our identity.

So the fact that we experience reality as such, with our emotional reactions and persistent self-narrative, is an evolved trait which helped the human genome become the dominant life form on Earth.


This is bad philosophy masked in scientific jargon.
