Toward a Mature Science of Consciousness (nih.gov)
105 points by lainon 7 months ago | 201 comments



John Vervaeke argues that much of our trouble with studying or defining consciousness is due to its fundamentally recursive nature.

Start with the definition: A conscious being is one which is conscious of itself.

Seems circular, but there is really no good non-recursive definition. This definition also seems to "ring" perfectly well with random people on the street.

It has been argued that any attempt at a non-recursive definition of consciousness includes things which most people don't consider to be conscious, see for example the paper titled "If Materialism Is True, the United States Is Probably Conscious".

Which leads to the problem of identification: how do we know that rocks are NOT conscious? Or, more interestingly, how do I know the universe itself, in its totality, is not conscious? The universe certainly must have all the requisite "components" of consciousness, whatever those components are.

Once again, the process of identifying is recursive. A conscious being can only identify, with 100% certainty, its own consciousness. Any "other" thing could theoretically be a p-zombie, or merely a machine passing a Turing Test.

Corollary: No conscious being can know, with 100% certainty, that a particular entity is NOT conscious.

Could we have a science of a recursive thing? Perhaps, but only if we are willing and able to accept a recursive model with circular arguments.


>No conscious being can know, with 100% certainty, that a particular entity is NOT conscious.

You might be interested in a very short 2011 paper of mine, "A paradox related to the Turing Test", on page 90 here:

https://www.kent.ac.uk/secl/researchcentres/reasoning/TheRea...

I'll paraphrase the paradox here. Suppose you can magically detect conscious entities. I begin speaking to you, and you are obliged to periodically guess whether or not I'm conscious.

Here's what I'll do. Whenever you are guessing that I'm conscious, my entire dialog will consist of nothing but "Uhhhh..." over and over, until you change your mind and start thinking I'm non-conscious. Whenever you are guessing that I'm non-conscious, then I'll speak normally.

Since I AM conscious, and you have your magic conscious-detecting ability, you should eventually reach a state where you're certain I'm conscious, and you don't need to change your mind any more thereafter. But once we've reached that state, which only takes finitely much time (so for all you know my whole dialogue could've been a tape recording), thereafter I only ever say "Uhhhh..." A non-conscious machine could do that, so do you change your mind? If you do, contradiction, if you don't, contradiction.
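For what it's worth, the adversary's strategy is mechanical enough to sketch in a few lines of Python. This is a toy simulation; the function name, the utterance strings, and the boolean standing in for the magic detector's current guess are all made up for illustration:

```python
# Toy simulation of the adversary strategy from the paradox.
# "detector_says_conscious" stands in for the magical consciousness
# detector's current guess about the speaker.

def adversary_utterance(detector_says_conscious: bool) -> str:
    """Play dumb whenever the detector currently guesses 'conscious';
    speak normally whenever it guesses 'non-conscious'."""
    if detector_says_conscious:
        return "Uhhhh..."
    return "Here is a normal, thoughtful sentence."

# Suppose the detector settles on "conscious" at round 3 and, being
# certain, never changes its mind again:
guesses = [False, False, True, True, True]
transcript = [adversary_utterance(g) for g in guesses]

# From round 3 onward the transcript is pure filler that a tape
# recorder could produce, which is the source of the contradiction.
```

The point the sketch makes concrete: once the detector's guess stabilizes on "conscious", every subsequent utterance is trivially mechanizable, so the detector's stable certainty is resting on output a non-conscious machine could generate.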


It is possibly the reason why some people believe that consciousness requires some kind of Cartesian pineal gland which connects our physical brain to metaphysical consciousness. So we don't need to analyze behavior; we just need to look for the presence of the "gland".

The problem still stands though. We can't be sure that the "gland" connects to metaphysical consciousness and not to the metaphysical cloud processor.

Nowadays the "gland" is usually mysterious quantum processes.


I would reach the conclusion that you are conscious, but a bit obnoxious or you like playing games.


You're implicitly assuming that you yourself have free will and/or are too complicated for a computer to mimic. Because if not, then I could just be a computer which doesn't even hear the words you are saying, and is merely reacting to the words I would hear if I did have ears (which I can do because I have your entire source code built into me).

The longer you meditate upon this very silly paradox, the more unsettling it becomes :)


100% certainty means either a 100% prior or some completely waterproof evidence, neither of which ever happens for any question. Still an interesting result!


Why would you start with a circular definition? You can't use a word in its own definition!

Consciousness: the degree to which something is self-aware.

Self-aware: being conscious.

There you go. /s


Consciousness: the ability to use the word consciousness.


Oh no, Google translate is conscious!


Not really. We have a different word, sentience, for self-awareness alone in English.

For example, monkeys are sentient (presuming the self-discrimination test is valid). Some birds are not known to be sentient. Are they conscious? Who knows. Consciousness is a misdefined grab bag of things.


I have to agree; I don't follow the logic in the parent post. Simple refutation: there can be conscious beings which are not self-aware (e.g. somebody on ketamine).


It is a great weakness of the English language that "being" implies both the nature of a thing and its current state. In Spanish, we have "ser" (a permanent quality, an identity) and "estar" (right now).

You are (ser) a conscious being, but you are (estar) not conscious in that state.


Eh, I'm as much of a solipsistic skeptic as the next guy, but, leaving aside questions about the fundamental knowability of truth (or brain-in-a-vat-isms), a precondition of consciousness is the possession of a mind. There is no evidence, and there are no causal factors, to suggest that rocks have a mind. Your argument, taken to its logical extent, allows us to say anything is anything; so we're not really saying anything, because any word is any word, the whole process of communicating is gutted, and even this rebuttal is pointless.


Sorry, I drifted off into solipsism in the last few sentences. Didn't mean to. I just meant to point out that even rocks can be said to have any number of the "components" required for consciousness. If consciousness is expected to emerge from things that look like neurons and neural networks, then we can consider the molecules of the rock to be its neurons, and the electrons (or sound/heat/energy) travelling between molecules of the rock to be its neuronal pathways, and then we should expect it to be possible for consciousness to emerge from that structure, at least theoretically.


What is a rock though? Is it not just a construct? A piece of dead skin on the surface of a living rock? Is it not like saying a synapse is not conscious because it doesn't have a brain?

The issue is not with the OP's argument; it is with the nature of (our) reality. We are not saying anything because we are talking semantics. This is about definitions, not reality. The differences between a rock and a human mind are semantic constructs built on sensory constructs. Semantics are all circular, and we lack the philosophical constructs to understand circular argument.


> What is a rock though?

What is the purpose of this question if a rock is anything but? I'm not arguing you're wrong; I'm arguing that if there is a debate here, it's circularly inconsistent, because it presupposes that words have meaning by sole virtue of their own ineptitude. I see no proof that truth is knowable by a mind (in our world, anyway), so at some level I agree with you that it's circular on a multitude of levels. But this argument goes nowhere, and to argue the other side of it (that a rock is a mind) amounts, to me, to saying something along the lines of "words have no purpose." It's even worse than circular dependency; it's circularly _inconsistent_.


Hence my point that our reality is the issue, rather than the argument. If we are using language which is inept, i.e. unable to capture reality, then how is cementing the meaning of that language supposed to help?

Are we supposed to limit our understanding of reality by limiting the questions we ask of the language with which we articulate it?

Or, put another way: how can we ask what consciousness is for a person or an animal or a computer, if we cannot answer it of a rock?

It is not at all to suggest that words have no purpose or are meaningless. Words are currently our best definition of consciousness.


Your first few paragraphs were quite enlightening, thank you for writing that.

However... the recursive nature of consciousness doesn't stop us from saying that rocks aren't conscious. You still need a medium that supports recursive operations of the necessary depth and complexity. Rocks don't seem to have such characteristics.


Well we know very well that rocks have molecules arranged in complex patterns, and that energy transfer occurs between these molecules in the form of electrons, or sound/heat/energy. Nothing too different from a neural network on a fundamental structural level.


For what it's worth, Vervaeke is best known for his work on relevance, and in particular for demonstrating how the Turing test (currently the most accepted "test" of consciousness) is insufficient:

http://www.ipsi.utoronto.ca/sdis/Relevance-Published.pdf


I always thought the Turing test was more performance art than scientific enquiry, meant as much to illustrate the tenuous nature of consciousness as to test a computer's abilities.


Link to the referenced paper, "If materialism is true, the United States is probably conscious", which I found fascinating. Thanks for mentioning it!

http://faculty.ucr.edu/~eschwitz/SchwitzAbs/USAconscious.htm


The depth of recursion seems to be limited. And for me personally, the limit is 2 (or is it 1? depends on how you count it). Which is not a true recursion :)


I found that Dan Dennett's "Consciousness Explained" really resonated with me. His thesis is (as best I can condense a whole book to fit in an HN comment) that consciousness is essentially an illusion produced by the brain as the best model it can construct of the sensory input it receives. But the model actually lags behind the present, and its history can be rewritten when new information comes in. Dennett describes it much better than I do, and provides a lot of experimental evidence. If nothing else, the experiments he describes will make you see your own consciousness in a new light. I recommend reading the book.


I've always felt that Dennett has side stepped the question here. The 'hard problem' here is to explain how a set of physical processes give rise to consciousness or sensory experience at all. In other words, why the lights are on.

As far as I can tell, consciousness is literally the one thing in the universe that can't be an illusion. Even if, in the extreme case, we're brains in a vat, etc.


> As far as I can tell, consciousness is literally the one thing in the universe that can't be an illusion. Even if, in the extreme case, we're brains in a vat, etc.

Cogito ergo sum!

"I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind."


The 'hard problem' here is to explain how a set of physical processes give rise to consciousness or sensory experience at all. In other words, why the lights are on.

But subjective experience isn't a light being "on" or "off". Subjective experience is a messy, fuzzy collection of stuff. People do things without intending to; people believe things they did automatically were intentional, and the reverse. The zone between waking and sleeping isn't sharply set, etc., etc.

If one does the "blind spot" experiment, one realizes one's vision is constantly confabulating to make up for vision's imperfections. And so forth.

So there's no reason not to expect that our experience and memory of consciousness is an incomplete reflection of what's actually happening.

Lots of people strongly believe they have a soul. That's clearly anti-materialist and has no basis in reality, but it seems like a quite natural illusion (one that the "hard problem" ideology resembles). The statement "subjective experience exists", if it's taken to mean that a single, uniform fabric of experience exists, seems unjustified given the overall messiness I mentioned; and if the hard-problem claim isn't this, what is it?


"Subjective experience exists" doesn't imply there's a single, uniform fabric of experience, nor does it imply anything about how subjective experience relates to an accurate perception of the world. Even if I hallucinate something that doesn't exist, there's still a subjective experience that I have of that perception.

I have no idea if anyone else subjectively experiences anything; everyone else might just be philosophical zombies. I still want to know why I have any subjective awareness of the world, though, and I don't see any way that that question can be answered via any sort of objective measurement or scientific process. You can study my eyes and my brain to explain why I can accurately perceive the color blue, but nobody can truly explain what it's like to see the color blue.


I'd take your argument further even. We could be mistaken about every experience we've ever had. Let's take the brain in a vat example, in which none of our sensory experiences accurately model reality.

This has no bearing on the hard problem of describing consciousness or how organic material (maybe silicon some day) gives rise to it. It also doesn't indicate how a physicalist picture of the mind could, in principle, capture the phenomena.


You can be mistaken about the content of your experience, but I cannot conceive how you can be mistaken about the fact that you're having one.

Dream, delusion, reality, drugs, schizophrenia, completely fabricated sensory manifold, whatever; the content does not matter for purposes of this question. Is that brain in a jar experiencing something? Is it experiencing at all?

That's consciousness.


But wouldn't it mean that literally all living beings have consciousness? That consciousness is not a binary thing, but that there are many levels of consciousness?

Humans always liked egocentric views -- "The Earth is the center of the Universe". Or another version: "Humans are superior because they have consciousness".

If there are many levels of consciousness, one may imagine some extraterrestrial super intelligent beings that are many levels above humans, that would think of humans as non-consciousness beings -- like we think of many animals.

Now, if we go down instead, we can say that all animals with brains are conscious (just not as conscious as we are), can't we?


Yes, it does.

In fact, I submit that anyone who says the line separating "conscious" from "not-conscious" animals goes here or there has to justify not just why that's where they drew the line, but that there is one in the first place.


cannot conceive how you can be mistaken about the fact you're having one

The experience of having an experience can be wrong. The existence of experience is itself content of experience, and you have written that the content can be wrong. If one is mistaken about the very fact of experience, is that a valid experience too?

https://en.wikipedia.org/wiki/Anton–Babinski_syndrome


AB syndrome doesn't seem any more significant to the issue here than synesthesia or schizophrenia. That their reported experience has no correspondence to the outside world isn't relevant to whether they're having an experience at all, or to its nature.


Yes! Much better said.


We could be mistaken about every experience we've ever had[...]

This has no bearing on the hard problem of describing consciousness or how organic material (maybe silicon some day) gives rise to it.

This assumes beliefs are entirely separate from experiences, but that's counter to the complex experience characteristic of consciousness: one can experience believing many things, including contradictory things, if one isn't careful. It's a narrative that claims to make things uncertain but actually, implicitly, maintains a neat, certain, rational narrator on top of sensory experience (sensory experience, right or wrong, is spoken of as mere data evaluated by a separate rational actor). However, both neurological research and reflection show that beliefs, consciousness, experience, and so forth are all meshed together.


You're trying to make "consciousness" mean far more than many of the rest of us in this discussion, and then arguing against the definition you're the only one using.

We're not talking about the content of our experiences, and as long as you keep insisting that's somehow part of the picture, we're never even having a conversation. "Consciousness," as we're using it, is the simple fact that you're experiencing at all. Nothing else. Not the content. Not the feelings about the content. Not the "neat, certain rational narrator on top of sensory experience". Just the raw, unfiltered, unqualified fact that you are having an experience at all. Full stop.

Everything else is detail, and can very easily be wrong. The fact that there is an experience happening in the first place, can't.

Otherwise, please explain to me how I can be mistaken about the fact that I am experiencing. Not what I'm experiencing — that I'm experiencing.


From what I understand of anesthesia, you can be in a state where you are experiencing (and can even have slow conversation with the doctor) but you don't remember it. If asked, you would say you were unconscious through the procedure, but a video of the interaction would show that you were there with some access to both your memories and the present moment. When dosed properly, the anesthetic blocks the formation of new memories. Somebody very much like you and inhabiting your body was there and experiencing enough to respond verbally.

If you interact closely with people having psychotic breaks and other mental disorders, you can witness someone mistaken about the boundary between themselves and the environment. If you can wrap your mind around that level of malfunction, I think you will begin to appreciate that the very statement "the fact that I am experiencing" is begging the question. The abstract concepts of identity, self, perception, and memory are all intertwined and more nebulous than is comfortable to think about.


My impression here is that intelligent people tend to over-complicate this distinction. Consciousness is what separates you from a rock. It's the fact that you have an inner life at all. That the world shows up for you. The points you make are interesting, but orthogonal to the core distinction of what consciousness is. EDIT: as you point out, your examples raise interesting points about personal identity over time, throughout space, etc. These, I argue, are just additional and separate distinctions.


You're still talking about the content of the experience. Remembering it is immaterial. The content is immaterial. That you had an experience of any kind — real, false, remembered, not — is what matters, not what it's an experience of.


I still claim you are begging the question by assuming there is a clear, binary distinction. If you assume dualism and demand an explanation for duality, I cannot satisfy you when my perspective is that dualism is an illusory concept. It is a bit like asking a biologist to explain postmodernism.

Let me borrow a helpful phrase from another reply above: inner life. I recommend some meditation on the possible inner lives of a whole spectrum of creatures. I take the liberty of assuming you would grant humans the richest inner life. What inner life exists in a great ape, dolphin, octopus, monkey, dog, cow, bat, field mouse, tarantula, lobster, honey bee, earthworm, clam, coral, or amoeba? Within any one of those species, how does an inner life vary between a zygote and a fully developed and experienced adult? What about members of those species who have a sensory disability and go their whole lives with a reduced complement of sensory organs or sensory nerves?

Consciousness may be merely an abstraction we place on a typical complex of sensory processing and introspective abilities in our minds. People may have more or less experience with alteration of this complex and form other abstractions such as unconsciousness, trance, daydream, hallucination, blackout, or catatonia.

Outside weird scenarios I mentioned in my previous post, we can only discuss the subset of these experiences which we remember. We don't even know what other qualitative experiences we may be having throughout our lives which are not normally encoded to memory. At risk of returning to my previous topic which you discard as mere "content" of awareness: I would argue that our introspection is just as vulnerable to mistake, confabulation, and delusion as our awareness of the external world. It may be a convenient fiction, much like our visual system papers over the blind spot in our retinas and the motion blur of rapid eye movements.


Apologies for the delay in responding. This week was my hack week, and I had Life stuff happening, too.

> If you assume dualism...

Which I don't. My position on this stuff is in the neighborhood of panpsychism or Objective Idealism, but not quite either, and both of which are very much monist notions.

If you weren't accusing me of holding a dualist position, could you please clarify what you're saying here?

> I recommend some meditation on the possible inner lives of a whole spectrum of creatures.

I've actually spent a fair amount of time contemplating the question of the nature of the inner lives of things not-me, thanks. I can't speak to what that experience might be like. Epistemic asymmetry is a thing. What I don't spend much time contemplating is that they have one.

It's interesting, though, that you bring that up — bats, specifically. Please read Nagel's "What is it like to be a bat?" if you haven't already. It's one of the first papers I'm aware of to meaningfully articulate this question. Dennett, himself — the thinker whose oeuvre, and particularly "Consciousness Explained", frames this discussion — called it "the most widely cited and influential thought experiment about consciousness."

So, for clarity, my position goes like this: I have an experience of the thing I mean when I say "consciousness" — this "inner life", which, until I'm convinced is somehow a delusion, I will continue to regard as the only direct experience I have. On the principle that solipsism is bullshit, I assume that the other beings like me with whom I share the world also have this 'inner life' thing going on.

Additionally, I see in the world other things which are clearly "beings", but which are, to varying degrees, not like me and the other beings I assume have inner lives. What about them is different, such that they wouldn't have this experience? Is it language? Many have that. Tool use? More than a few. Warm blood? (See: "Do Fish Feel Pain?")

It is, I submit, most parsimonious to assume consciousness of some form on their part, as well. When someone suggests otherwise, they're making a positive assertion, not merely that this imaginary line between "conscious" and "not conscious" goes in that specific place, but that there is a line in the first place, and must justify both.

> I would argue that our introspection is just as vulnerable to mistake, confabulation, and delusion as our awareness of the external world.

Of course it is. I don't think that's even in question. But that's actually my point: when you're introspecting, that's "you", there, in that moment, on that ride. Whether the things you're introspecting about are materially reflective of their antecedents, whether you're thinking clearly or biasedly, whether even your entire narrative in that moment, if not always, is just so much self-serving bullshit, does not matter. You're still introspecting.

That's what I mean when I say the content doesn't matter. Whatever I'm experiencing, true or false, I'm still experiencing.

The question no-one can answer to my satisfaction — all this hand-wringing about its being irrelevant, or illusory, or even incorrect notwithstanding — is how, if the only "things" that matter are reducible to observable, measurable, falsifiable phenomena, it should feel at all.

Because the thing I most want those tools to explain is how I can be here, in existence, having the experience of being here questioning my existence at all.

There is precisely nothing in this kind of reductive ontology that can account for the fact that there is a qualitative nature to my being at all. And yet it is the most direct experience I have. Not what I see, that I see. And, yes, the emotional associations I have with what I see, upon seeing it. Of course that comes from a different part of the brain. Yet I feel that too.

That's the question I want answered — the very thing being hand-waved away.


I suspect I cannot really help you with your question. Also, I only dabbled in introduction to formal philosophy as side-projects to my CS education. Mostly I am self-taught, having spontaneously performed several of the text-book thought experiments on my own, as a child. Apologies if my nomenclature is not quite right for you. I also have a somewhat anti-authoritarian disposition. I tend to adopt or kidnap ideas I encounter without any urge to toe the line of their respective schools of thought.

Yes, I was referencing that being a bat problem. However, what I meant to emphasize was the spectrum of creatures. Most people grant inner lives to familiar animals even though we cannot claim to comprehend them directly. But, many people have incoherent beliefs as they work further down the ladder towards plants, colonies, and other simple life. Their need to find a boundary where the spark occurs is closeted dualism, in my opinion.

I consider a quest for qualia to be essentially dualism in disguise, and endless debates about the term itself to be a proxy war on just how to beg the question. When I was lectured by Searle on his views on being a bat and his Chinese room, I felt he was begging the question and playing to the peanut gallery. He wants you to assume there is "understanding" that a room lacks. He doesn't want you to dwell on the difference between the room and the system comprised of the room, the rules, the translation state, and the executive functions literally embodied by attendants. I enjoyed a contemporaneous course from Lakoff to consider his metaphorical mappings and category thinking as a counterpoint. We all liked to joke that it might be metaphors all the way down. My reading of these topics may have begun my own gradual turn towards reductionism and behaviorism.

I do not think qualia deserve any special place compared to any other abstraction. I don't think there is anything more real about the sense of first-person experience than there is about a sense of justice or love in a particular human interaction. I think that these are all in the same category of perceptual abstraction. I think ALL abstractions admit or even encourage an ontological failure mode, which is to impose discrete symbols over fuzzy realities, and to tempt logical deductions over this willful ignorance. This is why I frequently return to topics like identity, memory, and mental pathology which illustrate such failings. I question the "you", the "moment", and the "ride" as all being convenient but potentially misleading abstractions.

Finally, I think that the qualitative experience you seek to explain is tied up in the mind-body problem and has as much to do with introspection of the body and hormonal systems as it does with introspection of cognition. I took interest in Rodney Brooks and his subsumption architecture as a possible explanatory mechanism here. To keep insisting that you can discuss the quality of experience independently of the content is, again, a signal of latent dualism to me.


"That's clearly anti-materialist and has no basis in reality" ...so a point, a line, and a plane have no basis in reality. Poor Euclid. Poor Geometry students, wasting their time all those many generations.


Like I said, trying to condense it down to an HN comment imposes some very severe restrictions. The word "illusion" is not really quite right. A better description might be (and these are my words, not Dennett's) that consciousness is in its own ontological category [1].

[1] http://blog.rongarret.info/2015/02/31-flavors-of-ontology.ht...


Ugh. Has discussing the ontological repercussions of anything ever brought a practical result?

Especially with such a lack of logical rigor as is shown when even otherwise decent scientists talk about consciousness...

Mathematics has much more consistent definitions of categories and similar concepts with real measurable results and applications.


You didn't actually read the linked essay, did you? Because that's pretty much what it says.


Yes, although it indulges in ontology for a few pages of text, meandering until it gets to some of its points.


The 'hard problem' here is to explain how a set of physical processes give rise to consciousness or sensory experience at all.

Why is it hard? It's not as if evolution failed to provide us with a path to it. Primitive beings have very simple inputs and minds. As the behaviour needed for survival gets more complex, minds also need to join data from different senses into an integrated image. We also need pain and pleasure to direct behaviour. Quantitative changes result in qualitative ones, and that's it.

Everybody acknowledges that the brain is a hugely complex machine, and then is surprised that it's able to build perception. That's weird.


Imagine if I came to you and asked, "how is that bird flying?" It wouldn't be very useful for you to answer, "well, it figured out how so that it could escape predators."

You're describing a reason why consciousness might exist, but you're no closer to describing how it exists.

The reason it's a hard problem is that we don't understand the relationship between a purely physical process and awareness. It's not clear how at some point a complicated set of behaviors would equate to the personal experience we ourselves feel - we don't have good mechanisms or tools yet to explain how that jump is possible.

So yeah, it's obvious that the jump happens. But outside of philosophers and scientists like Daniel Dennett, probably the majority of people are already convinced of that part.

The hard problem isn't "why qualia", it's "how qualia".


Start with what it is. And do not mix it up with sentience or intelligence.

Soon you will discover that consciousness is an empty word, much worse than dark matter or energy in physics, as those describe observable things with clear definitions.

What is a quale? Saying it is a quantum of subjective experience explains nothing, because then you have to properly define experience in general. Trivial definitions end up at the sensory level, which is probably not what was meant. Advanced definitions, like, say, "symbolic representation of an experience", are more workable but anger philosophers. (It opens practical avenues of research, such as: how humans assign symbols, what kinds of symbols they construct, how symbols are communicated, what neural correlates of experience may relate to symbols, and how internal and external symbols are discerned.) Incidentally, AI may have such qualia insofar as it can self-generate symbols.

Or you have to first define what consciousness is, and not in terms of qualia.

Or at least define what is subjective, without trying to say that it is something that cannot be objectively quantified, because that is a known lie.


> Saying it is a quantum of subjective experience explains nothing because then you have to properly define experience in general.

Qualia is experience.

Rocks don't experience things, they perform chemical reactions according to physics. In a world without qualia, there is no such thing as experience. The hard question is how chemical reactions result in something like experience. There's no logical, causal relationship we can find as to why a chemical reaction should result in one.

And even though we don't know much about what experience is or how it works, we seem to be universally aware of it. In fact we seem to have a mechanism to directly observe it. This is another thing that's weird to us and that makes "experience" special and interesting, because it's atypical - usually we can only observe things through external means; sight, touch, etc...

With "experience", this is reversed - almost as if by gaining the ability to directly observe a thing, we lose the ability to externally observe it. There is significant debate over whether this is an inherent property of the thing, or whether advances in our measurement tools will allow us to either indirectly observe the thing, or at least observe its effects.

You're coming into this with the assumption that the ability to communicate about a thing is a prerequisite to that thing existing. It's not. The map describes the territory, not the other way around.

We don't have good definitions of what consciousness is, which is why it's hard and why physicists and philosophers want to study it. It's a thing that we can see and we are trying to learn its properties. As we learn those properties we continue to attempt to map it into another medium - language.


...we don't have good mechanisms or tools yet to explain how that jump is possible

Then why say that it's a hard problem, or even that it's a problem at all? We will cross that bridge when we reach it. And why do you say there's a jump? Or more precisely, which jump are you talking about? I hope you don't mean the jump between subjective experience and the physical framework that supports it... Wittgenstein showed long ago how that's a fool's errand.


I mean the existence of subjective experience in the first place. The jump is not just as Wittgenstein suggested that we might not be able to communicate a subjective experience, it's also that we don't understand how a subjective experience exists to communicate in the first place.

From a purely physical perspective, there shouldn't be such a thing as subjective experience.

We don't, strictly speaking, need to solve that problem right now. We could wait and cross that bridge later. We could just leave it at "it's an emergent property, and it works somehow." In the same way we could also say, "well, big objects attract each other, and we're gonna leave gravity at that."

But for various reasons, philosophers and scientists find that kind of answer unsatisfying.


From a purely physical perspective, there shouldn't be such a thing as subjective experience.

Why?

But for various reasons, philosophers and scientists find that kind of answer unsatisfying.

https://www.sciencealert.com/watch-richard-feynman-on-why-he...


Because we don't see any reason for it, and we don't have any precedent for something similar happening in other situations. We also observe that many properties of subjective experience are counterintuitive and seem to contradict what we would expect from an emergent system. This makes us curious and we want to learn more about it.

Feynman's response in the video you link is not that it's unreasonable to ask about the mechanics of a thing. Quite the opposite - it's that simple questions are often very deep and very complicated, and can't be described in one or two quick sentences.

What he's not saying is: "Why is the sky blue? Well, why shouldn't it be?"


Because we don't see any reason for it...

That's an appeal to ignorance.

and we don't have any precedent for something similar happening in other situations

Precedent? Come on, that's like saying atoms shouldn't exist because there is no precedent for them.

We also observe that many properties of subjective experience are counterintuitive

That's vague and terribly subjective. Who decides what is intuitive and what's not in this context?

and seem to contradict what we would expect from an emergent system

Who would expect what? Again you are simply saying that you don't understand it so it must not be possible.

I linked the Feynman response just to make you understand that what you are saying lacks any reference frame. You are using your own intuitions to declare impossible a thing that, to begin with, already exists :) and that, moreover, is exactly what generates your intuitions, so you are falling into circular reasoning.

Oh and Feynman doesn't say that magnetism can't be explained or that it shouldn't exist, just that it can't be explained in terms of other phenomena, exactly the same as consciousness.

Edit: HN won't allow me to reply to your last comment, so I will put it below, though I won't comment further anyway.

It's not that I'm not interested in how consciousness emerges. What I object to is the characterization of the basis of consciousness as a hard problem. It's just an area of knowledge about which we don't have enough information to talk. Saying it's a problem, even a hard one, is jumping to conclusions very prematurely.


> Again you are simply saying that you don't understand it so it must not be possible.

I... don't think subjective experience is impossible. I think that our current knowledge about how it works is very limited, there's a lot for us to learn, and I'm encouraged when philosophers and scientists research it more. I'm discouraged when people say "it exists, what else do we need to know?"

Are we saying the same thing in different ways?

When I say "jump", I don't mean that there is no explanation for subjective experience. I mean that we do not currently know what that explanation is. It's a jump because it is currently a gap in our knowledge about a phenomenon that exhibits interesting properties and that we don't know how to reliably measure or reproduce.

When we talk about things like consciousness we have to "jump" over that gap. It would be nice to fill in the gap a little. Is it possible you thought that I meant something else?

For a lot of people, basic questions like "what are atoms," or "why is there matter", or "how does a physical process create a subjective experience" are really interesting. And to a certain extent, why should atoms exist? It's not a bad question - we can start talking about electrons, and then we can start talking about forces, and then at a certain point we'll have to say, "well, we don't know about this thing yet. Ask again in a decade and maybe we'll have some theories to propose."

There are large branches of science devoted to these types of questions, and pursuing them has led to useful theories and practical applications in the past.


>The 'hard problem' here is to explain how a set of physical processes give rise to consciousness or sensory experience at all. In other words, why the lights are on.

Dennett would reply: once I've explained every detail of the physiological system, and in so doing perfectly mapped inputs to outputs, what is there left to explain?


Dennett's reasoning is essentially: "I can't see or don't know anything beyond this line, therefore nothing can exist beyond this line."

While attractive, it is not logically valid.


I think it's more like, "there is no reason to suppose there is anything beyond this line because everything can be explained without crossing it", which is logically valid.


If I were to adopt this approach I would be dishonest - because every second of which I am aware shows me that there is something beyond the line.

DD may be different...


Or it shows you something which you erroneously conclude falls beyond that line.


It's about honesty - intellectual honesty. You or I can be mistaken; if it's an honest mistake, then you or I will never know. Erroneously or not - the basic point is that this is what you believe.


That is not what DD is claiming though. It is more along the lines of: "If X exists it is beyond this line. I can't explain anything beyond the line, so X is an illusion." That is what DD claims.

Maybe DD is a true zombie.


I disagree with your paraphrase. Why not cite a specific quote or passage from Dennett where he makes such a ridiculous claim.


"makes such a ridiculous claim"

https://plato.stanford.edu/entries/materialism-eliminative/

In another well-known article, “Quining Qualia” (1988), Dennett challenges not just our conception of pain, but all of our different notions of qualitative states. His argument focuses on the apparently essential features of qualia, including their inherent subjectivity and their private nature. Dennett discusses several cases—both actual and imaginary—to expose ways in which these ordinary intuitions about qualia pull apart. In so doing, Dennett suggests our qualia concepts are fundamentally confused and fail to correspond with the actual inner workings of our cognitive system.

X (qualia, what we don't know) does not correspond to Y (cognition, what we understand), so X does not exist.

Yes. It is this ridiculous. :)


Yeah, that's still incorrect. We have no reason to suppose X because Y can explain X as an illusion.

It's the exact same process that happens in every other science: you don't expand your axiomatic basis unless you have to.


> Yeah, that's still incorrect.

Saying it does not make it so.

> We have no reason to suppose X because Y can explain X as an illusion.

But Y does not really explain X. That is the core point. Saying Y explains X is just ignoring the question. It is just Dennett claiming "Y explains X" with a flawed argument.

> It's the exact same process that happens in every other science: you don't expand your axiomatic basis unless you have to.

Btw, when talking about axioms, there are rigid formalized versions of physics and math (look up Mizar). Unless Dennett proves his claims with a formal theorem from rigidly defined axioms, I am not buying his arguments. Too much fluff in philosophy when dealing with important problems.

Just the word "illusion" can have n different definitions and maybe the illusion itself is an illusion :D


> But Y does not really explain X. That is core point. Saying Y explains X is just igoring the question. It is just Dennett claiming "Y explains X" with a flawed argument.

We have no reason to believe that Y cannot explain X, and ample reason to believe it can. Every previous circumstance of special pleading of this sort has eventually fallen to scientific inquiry.

Furthermore, various observations and theorems in physics, like the conservation of energy and the Bekenstein Bound, suggest strongly that the mind is a fully encapsulated, bounded physical system.

The only way to escape this inevitability is to posit that consciousness is some extra-physical quantity that has no impact on physical matter whatsoever. And what problem does adding this to our axiomatic basis solve exactly? None that I can see except yet another instance of special pleading.

If anything, such a step actually introduces more problems, because how would you explain the fact that we, beings of physical matter, are talking about consciousness if consciousness cannot influence matter?


There are for sure things of this world that are not explicable, even in principle.


Such as?


"The line we placed there was arbitrary. The division it creates is the very cause of the problems of consciousness. Science experiments show that it is likely at the wrong location, no matter how confident we were when we drew it. I will attempt to solve these artificial problems of consciousness by showing that it is a miscategorization and not-well-justified to draw the line in the first place."


I'd argue that any such mapping of inputs and outputs (fully mapping our mind) would equally describe a system that is not conscious. In other words, the whole analysis could just as well be describing a zombie that behaves like us but does not have experiences.

Put another way, it would fail to capture why that system (our brains) produce any experiences at all. I know this is disputed, of course. But I basically think the hard problem is a real thing: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


Obviously, internal states and the actual system.

A memoryless system and a stateful one may produce similar or even the same output given identical input. Since you cannot quantify an infinite span of inputs, you cannot be quite sure which one you're dealing with.

This is even harder if both systems are noisy.

The mechanistic corollary is that you have to mess with whatever causes consciousness to actually understand it. Inject states, damage components and more. Even then you might make a mistake.
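To make the indistinguishability concrete, here's a minimal sketch (the class names are my own, purely illustrative): a memoryless box and a stateful box whose outputs agree on every input we happen to try, even though only one of them carries hidden internal state.

```python
class Memoryless:
    """Output depends only on the current input."""
    def step(self, x):
        return x % 2

class Stateful:
    """Carries hidden state that never surfaces in the output."""
    def __init__(self):
        self.history = []           # internal state invisible to an observer
    def step(self, x):
        self.history.append(x)      # state changes on every call...
        return x % 2                # ...but the output doesn't reveal it

a, b = Memoryless(), Stateful()
inputs = [3, 8, 5, 5, 2]
print(all(a.step(x) == b.step(x) for x in inputs))  # True: outputs match
```

Any finite probe of inputs leaves the two boxes indistinguishable from the outside; only opening them up (inspecting or injecting state, as the comment above suggests) tells them apart.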


> consciousness is essentially an illusion

This sounds like a contradiction. What is being illuded? If there is no subject, there can't be an illusion.


When people imagine their consciousness/free will faculty as its own object, they sometimes see a theatre/stage, where they are in the audience and can observe the central point where consciousness ends up forming, or a daemon sitting in front of a bunch of levers and pulling one to make a conscious free-will decision.

Yet, Dennett shows that there is no such single centralized point where consciousness emerges, just a brain creating that illusion to make a higher-level, lossy sense of things (a consequence of modeling your own cognitive processes using Descartes-era philosophy is that you have to place an artificial cut-off somewhere). Cognitive science has experiments like the Stroop Test [1], which prove that different cognitive faculties operate at different speeds and in parallel. They may even compete for attention. Consciousness is thus distributed and a "multi-agent" system; you may not even become "consciously" aware of what the fast-operating faculties are processing and feeding up to more complex faculties. Other cognitive science reaction-speed experiments have shown that data is processed (helping you make decisions) before it enters conscious thought (so all "behind the stage").

[1] https://en.wikipedia.org/wiki/Stroop_effect


There are different kinds of illusions. The world we see around us is an illusion created by the brain's sensory processing. But this illusion is good enough to allow us to act in the world. We don't call it illusion, we just call it the world.

If I think "I think", it is a vast simplification of the processes going on in my brain, but it doesn't mean that there are no processes corresponding to my thinking "I think".

Most of the time I act as a single intelligent agent, not as a bunch of subsystems with no unifying goals. So this "illusion of consciousness", while a simplification of brain processes, falls in the category of things we usually don't call illusions.

And that's why Dennett's wording seems misleading to me.


> But this illusion is good enough to allow us to act in the world. We don't call it illusion, we just call it the world.

I like calling it a "world model". Optical illusions prove that this "world model" can be consistently tricked for a wide range of humans. Now the "illusion of consciousness" is a categorization error: you apply your "world model" to your own internal processes / consciousness. So far so good. But this does not give you the right to claim that consciousness is your "world model" view of it. If it were, then people who took LSD and whose "world model" let them believe they could swim in the sky could likewise claim to change reality/ontology/the world for all of us.

The illusion is that your world model of consciousness does not equal consciousness in reality, as proven by science, despite how clearly it may appear to you (not that conscious experience itself is an illusion).

> Most of the time I act as a single intelligent agent, not as a bunch of subsystems with no unifying goals.

This is the illusion. To you it seems that you act as a single intelligent agent, unaware of the thousands of majority votes by your senses/neurons that lead up to that point.

Both your view as a single acting agent and this illusion are still legit. But again, it is a categorization error to conclude that your experience validates that humans are all individually acting single intelligent agents. That's Descartes-era philosophy and the legacy that Dennett was railing against: "I think therefore I am" becomes "I think that I am a single agent, therefore I am such".

You can compare "hunger" with "consciousness". Disease can make one feel not hungry while medical investigation shows that the body is in desperate need of sustenance, or vice versa. Now is it an illusion/delusion to say "I am hungry" when your body is already full? Maybe. But it becomes a mistake when you declare "I feel hungry, therefore my body needs more sustenance". That seemingly logical conclusion is the result of an illusion.


> becomes "I think that I am a single agent, therefor I am such"

No, it's "Most of the time I act as a single agent, therefore my perception of myself as a single agent is not an illusion, but a simplified model". It doesn't matter how many neurons voted, if I'm doing what I was thinking I'm going to do. Experiments can poke at edge cases where the self-model is incorrect, but that's expected.

In the case of a brain, my self-model also runs on the brain's neurons and is part of what the brain is doing. That makes it even less illusory than an internal model of the world.


I sometimes think I am like a storm glass or barometer. If there is a huge storm brewing that meter is going up or down. From my perspective I am doing exactly what I am thinking about doing, this storm really wants to make me go up or down. Of course, I have a limited view on reality, I may not even notice the storm itself.

But cognitive science, only through study of brain lesions and experiments, can offer glimpses of what is out there. What the weather really is like.

But what if the very act of categorization was an error to begin with? Causal inference poses problems like: Does the barometer change cause the storm, or does the storm cause the barometer change? These can be better solved by saying: The pressure in the barometer changing _is_ (part of) the storm. Instead of saying: If I go up, I cause the storm to follow ("If I am thinking I am a single agent, my consciousness must be singular").

In the end you are free, and I encourage you to, call it a simplified model, not an illusion. But to discard all of Dennett's consciousness philosophy on the basis of a poorly chosen word, is not a valid or fruitful conclusion. You'll miss the memetic good sauce that cures Naive Realism: https://en.wikipedia.org/wiki/Na%C3%AFve_realism

I myself, personally, prefer RAW's Maybe Logic approach to consciousness, though that it arguably less academically sound (though not less wise for it): https://www.youtube.com/watch?v=A7N6TOFyrLg


Is arithmetic (a part of) a calculator? No, it doesn't physically exist. Is arithmetic an illusion we need to explain away?


> When people imagine their consciousness/free will faculty as its own object, they sometimes see a theatre/stage, where they are in the audience and can observe the central point where consciousness ends up forming, or a deamon sitting in front of a bunch of levers and pulling one to make a conscious free will decision.

I don't know anyone who sees consciousness this way, but perhaps we've met different sorts of people.

Your second paragraph seems like a red herring. I don't see how any of this addresses the issue of conscious experience in the first place.


It addresses the issue by telling you: don't worry about consciousness in the first place. How your higher-level cognitive faculties see and categorize consciousness is proven to be an illusion. Eliminate this singular consciousness, this artificial construct, and any hard or soft problem disappears with it, and we can start defining conscious experience properly, instead of attempting the impossible: giving a solution to everyone's subjective (and possibly eternally conflicting) theatric view of consciousness.

It is not always the goal (or possible) to create a complete model of conscious experience that is indistinguishable from reality, yet one may model the path of a hurricane without getting blown away by the wind.


And this is why, no matter how many downvotes it gets (and believe me, I get them for it), I will continue to say Dennett's book should have been titled "Consciousness Explained Away."

He dances around the thing pretty much everyone else means when they say "consciousness" or "the Hard Problem" and says of everything he can, "Welp, that's not the thing they're talking about!", except the thing we're talking about, which he more or less doesn't even acknowledge.

"'Blue' means only Pantone-292. The sky isn't Pantone-292. Therefore, the sky isn't blue."

Specious, isn't it?


I think it was because in a time where objectivism ruled, consciousness was still a highly subjective subject. "Consciousness Explained" was an effort to discover the other side of the coin.

A subjectivist sees art, and reasons that the art object is fully formed inside his/her mind. An objectivist sees art, and reasons that the art object is entirely contained in the physical manifestation of it. Consciousness Explained was an attempt to show this objectivist view of consciousness.

Later on he tried to marry both in heterophenomenology: "The sky seems blue to non-colorblind subjects, but objectively, in reality, it is part of the human-visible spectrum at wavelength n." This framework still gives legitimacy to the subjective experience of a colorblind person, without allowing it to change physical reality (for consciousness: allowing for personal experience and realization of it, without allowing this to change the neuroscience/ontology: "I experience sequential thought, therefore thoughts must be sequential, not parallel.").


Nicely put.


If you like Dennett, check out Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", which was highly influential for Dennett. Jaynes does a fantastic job pinning down what consciousness is and what it isn't.


Jaynes is a fascinating character. He's so wrong, and yet you can't put the book down.


Why is he wrong?


I found Metzinger’s ‘The ego tunnel’ immensely helpful in identifying the constituent parts that we mean by consciousness. Everything the physicist Sean Carroll says about consciousness rings true to me. It is only an illusion in the same way the 2nd law of thermodynamics is an illusion built from particle interaction and statistics.


So ... not an illusion at all, then?


> the 2nd law of thermodynamics is an illusion

That is an excellent analogy. Perhaps "emergent property" would be a better term for both, but even that phrase doesn't quite capture the true spirit of Dennett's argument. It's not just that consciousness is emergent, it's that its true nature is actually very different from what we think it is.


The term Carroll uses is a weakly emergent property. You can watch Dennett, Carroll, and others discuss it here: https://www.preposterousuniverse.com/naturalism2012/


It's an illusion in a similar way that multitasking on a single CPU is an illusion.


And yet, given some time interval, all threads do make progress within that interval (assuming it's not too small, of course). So if by multitasking we mean that execution of multiple threads progresses over time, then multitasking on a single CPU is real. The point is that one's perspective is relevant to what's correctly taken as real.

But what is the perspective of neural circuitry's acquaintance with neural circuitry? I don't know, but I know it's not like our third-person acquaintance with neural circuitry. And so saying phenomenal experience is an illusion of neural circuitry (and thus not real) is a mistake. It's like saying heat isn't real, only energetic particles are real, despite the fact that I just got burned by my hot pan.


You're correct; it's more accurate to say it's an illusion of parallelism. On human timescales the parallelism appears real, but it really isn't.

> It's like saying heat isn't real, only energetic particles are real, despite the fact that I just got burned by my hot pan

Which is true, heat isn't real, just like my car isn't real. These are labels we apply to loose macroscopic phenomena. And so it is with consciousness.

At some level of abstraction, we can certainly talk about consciousness as something real since it's clearly a phenomenon requiring explication, but the "real" we're talking about in this sort of debate is some irreducible metaphysical existence, such as that posited by dualism.
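The single-CPU analogy above can be sketched in a few lines (function names are my own, illustrative only): a single-threaded round-robin loop in which two "tasks" both make steady progress, even though only one step ever executes at any instant. The apparent parallelism is purely an artifact of interleaving.

```python
def task(name, n):
    """A cooperative 'thread': one unit of work per step, then yield the CPU."""
    for i in range(n):
        yield f"{name}:{i}"

def run(tasks):
    """Round-robin scheduler on a single 'CPU': one step at a time."""
    trace = []
    while tasks:
        t = tasks.pop(0)           # take the next runnable task
        try:
            trace.append(next(t))  # execute exactly one step
            tasks.append(t)        # requeue it: round-robin
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# Interleaved trace: ['A:0', 'B:0', 'A:1', 'B:1']
```

Both tasks advance within any sufficiently long window, which is the sense in which the multitasking is "real"; zoom in to a single instant and only one task exists on the CPU, which is the sense in which the parallelism is an illusion.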


>but the "real" we're talking about in this sort of debate is some irreducible metaphysical existence, such as that posited by dualism.

But this seems like a mistake. When someone like Dennett says that phenomenal consciousness doesn't exist, it seems like he's denying the reality of the appearances of phenomenal experience. That is, we say that phenomenal experience seems a certain way to us, but Dennett counters that this appearance is false. Denying the reality of the appearances of phenomenal experience is denying the existence of what most people take as the explanandum in this debate.

But there's a difference between denying a theory of the nature of a phenomenon and denying the phenomenon. We can say that phenomenal experience isn't irreducibly fundamental without denying the existence of phenomenal experience. But illusion talk denies the phenomenon; it's not merely saying it doesn't have a fundamental existence.


> But illusion talk denies the phenomena, it's not merely saying it doesn't have a fundamental existence.

I don't see how. An illusion is a perception entailing a false conclusion when taken at face value. The perception clearly exists, but what it entails is the illusion.

Qualia would then fall under the same category as other perceptive illusions, like optical illusions:

https://pixabay.com/en/pencil-bent-pencil-pencil-in-water-24...

Just so we're clear, when you say "appearance of phenomenal experience", I read "the entailment of a perception". And it seems perfectly sensible to say that the entailments of perceptions can be, and often are, false.


"Entailments of perception" is too broad and so doesn't pick out the right target here. My immediate perception entails that there is a red cup a short distance in front of me. Entailments about the outside world can certainly be false (e.g. that red is a property of the cup that lives in the outside world).

But what does the perception of a red cup entail about my inner state? Nothing that I can tell, except that I am having a perception of a specific kind. What does it mean to say that my perception of a specific kind is an illusion? It's hard to say. The perception gives me certain powers of discrimination that entail the necessary veracity of those aspects of the perception (i.e. I can tell red from blue, red from pain, red from pitch, etc). Are there aspects of perception that don't play a role in any kind of discrimination? Not that I can tell. And so perceptions themselves just don't seem like the kinds of things that can be properly called illusions.


An illusion to whom though?


To you, obviously. That's kind of the point: your own consciousness isn't really what you think it is. Yes, I know that sounds really weird to the point of being a logical impossibility, but trust me: there's experimental data to back it up.


What is "you"? That's the question of consciousness. Sure, maybe your entire internal and sensory existence is a play that your mind is watching. It's well known that a machine can be built to measure your decisions before you are aware of them. That doesn't answer the core question: what is the "you" that is aware of (the rest of your brain's) decisions, that is observing your thought process?


> What is "you"?

You is the thing (or, more accurately, the process) that is conducting this conversation.

> What is the "you" that is aware of (the rest of your brain's) decisions, that is observing your thought process?

But that is the whole point: you aren't actually aware of the rest of your brain's decisions. You think you are, but you're wrong. That is Dennett's thesis.


> there's experimental data to back it up

To back what up, exactly?


Dennett's thesis.


The thesis that there is no hard problem, or the thesis that we don't understand our own consciousness?


If you really want to know the answer to that, read the book. Life is too short to argue with anonymous trolls.


I ask a simple question and you call me a “troll”. Do you always get offended when someone presses you for details?


"Trust me" pretty much always means "don't".


In this case it's just a figure of speech. You don't actually have to trust me. You can read Dennett's book and see for yourself.


Dennett is becoming more and more part of the "old guard". Things are shifting towards other approaches. Of course, popularity doesn't necessarily mean anything, but keep that in mind when referring to him as an authority.


> Dennett is becoming more and more part of the "old guard". Things are shifting towards other approaches.

What evidence do you have of this? In fact, the majority of practicing philosophers are actually physicalists (56.5%):

https://philpapers.org/surveys/results.pl


I didn't refer to him as an authority. I just said his thesis resonated with me (but at 53 I guess I'm part of the "old guard" too). YMMV of course.


Is there an easy read where I can get a gist of where things are shifting?


panpsychism


If that's true – and I don't know if it is, even though panpsychism resonates with me personally – can you explain how panpsychism would conflict with the idea that consciousness is an illusion?

It seems like it'd be the next logical step: if consciousness is simply what (sensory) computations feel like "from the inside", then it stands to reason that other computations can also "feel like" something from the inside, regardless of the substrate they run on, no?


Everyone without much direct experience of what consciousness is will continuously become the "old guard."


Everyone who is not brain dead has direct experience of what consciousness is. It's the only thing that is directly experienced, in fact. Everything else is second-hand.


Everyone who has a computer has the same level of understanding about what a computer is? Mental.


You're missing the point so badly, I'm not even sure where to begin.


And I felt the same about your comment, but I still tried my best to explain where I think you're delusional.


Your question presupposes the existence of a subject and is therefore circular. An "illusion" does not need a subject, it's merely a fact that, taken naively at face value, entails a false conclusion.


What is your definition of "illusion" here, as I see you've used quotes? Even your description of it is as a fact that is taken naively at face value; who takes the fact naively, and at face value? Only a subject could do that. Information can only be knowledge when there's a "knower".


> Even your description of it is as a fact that is taken naively at face value; who takes the fact naively, and at face value. Only a subject could do that. Information can only be knowledge when there's a "knower".

That's incorrect, unless you consider machine learning algorithms to suddenly be subjects/"knowers". They suffer from some of the same perceptive illusions that plague our optical processing.

So to answer your question directly: there is no "who"; there need only be a "what", at which point any problem with calling qualia an illusion evaporates. There is no irreducible subject involved, only an information-processing automaton that integrates facts acquired from perceptions. When a perception is taken to be accurate but entails a conclusion that is false, that's an illusion.


The algorithm is an interesting distinction to make.

I could argue that any machine learning algorithm is simply making probabilistic calculations based on input data in order to reach some kind of singular output. In this view (which I wouldn't discount), the machine learning algorithm is little more than a very advanced function. In my view, a function is not something I would consider a "subject", nor would I consider it to have any ability to "perceive" an illusion. There's no sentience.

If we say the complexity of the algorithm is great enough that it can be said to "perceive", I would argue the algorithm does become a "who" instead of a "what". Still, there is a surprising amount of nuance to this concept that makes it very interesting.


"Information processing" doesn't create the internal feelings I feel, just as information processing can never create an apple for me to eat, even if it can perfectly simulate an apple and eating.


> "Information processing" doesn't create the internal feelings I feel,

Pure speculation, and almost certainly false. If you hadn't evolved the pleasurable feeling of eating an apple, or any other food, you would have died and your inability to feel would have gone with it. Feelings provide drive and motivation, they are functional.


I always found fascinating how, given enough time, organic matter evolved to the point where it became aware of itself, created mental forms to communicate with itself (language), named and categorized itself (human beings, plants, animals, rocks, etc) to try to understand itself.

The water that I drink has been inside countless beings for eons. My body is composed of elements that have been on this planet for eons. I think I'm separate from the rest of the planet, of the system, but am I, really?

I am like a VM inside a hypervisor. I think I'm separate from the whole, but that's only an illusion.

Our senses are powerful and allow us to navigate the world, but if we train ourselves to go beyond them, beyond the language we collectively created, beyond even our memories and thoughts/feelings associated with them, what will we find? Perhaps consciousness is something that should be studied individually?


It seems to me the way forward to understanding consciousness scientifically is making computer simulations of parts of conscious awareness and comparing them with how it works in real brains until the results are similar. "What I cannot create, I do not understand" as Feynman had on his blackboard. https://www.quora.com/What-did-Richard-Feynman-mean-when-he-...

Neural scene representation and rendering (deepmind.com) https://news.ycombinator.com/item?id=17313937

seems a step in that direction.


> What I cannot create, I do not understand

Does not imply that you necessarily understand what you can create.

In this instance, I feel creating something that behaves like a normal brain would not straightforwardly lead to an understanding. On the contrary, it may leave us with more questions than answers.


> In this instance, I feel creating something that behaves like a normal brain would not straightforwardly lead to an understanding. On the contrary, it may leave us with more questions than answers.

Depends on what level of abstraction you create it.

Two examples:

1. a person creates a complex video game using assembly language (Roller Coaster Tycoon). It can probably be said this person understands very intimately how every aspect of his game works at every level of the computer.

2. a person creates a complex video game using <Unity, Unreal, etc.>. It can probably be said that the person understands how their game works at a high level, but lower levels (memory, OS, graphics) are still magic to that person.


#1 certainly not. Chaotic systems (such as the game of life) are easy to build from scratch, but very hard to understand. Reduction is not understanding.


What are the moral implications of creating and destroying simulated consciousnesses?


A good start towards defining consciousness is to box it in with edge cases: a person in coma? a severely mentally retarded person? a person on hallucinogenic drugs? a dog? The answers to these edge cases can help us separate possibly entangled concepts that seem to come under the umbrella of consciousness.

I would concentrate on collecting empirical evidence, at the expense of developing further theoretical speculation along the lines of "p-zombies". I could see a Darwin-like figure traveling widely and recording detail minutely, and using it to make the case for a simple unifying force behind an enormous spectrum of variation.


If controlling and isolating variables is crucial to objective science, and using our own consciousness is a requirement to test/observe consciousness, is it even possible for us to objectively test consciousness?


Is there a simple explanation for why consciousness (as an idea, or something distinct from unconsciousness) is necessary?

I think my wording was poor. I don't mean necessary for things that may exhibit consciousness. I mean why should consciousness even be considered to exist? Why is it a necessary complication to understanding the mind?


Does the idea of consciousness have any explanatory power? Does it pick out something substantively different than unconscious processes? If we were to replace all talk about consciousness with talk about molecules swirling about, would we lose something? I think the answer to these questions is yes. It seems that consciousness is a necessary feature of a complete explanation of the world in the same way trees and tables are. That is, even if these things can be reduced to more fundamental things, talk of trees and tables, and consciousness, is still important.


You could probably use a few more specific words to greater effect. Such as self-model, world model, memory, information processing, directed action, responsiveness. Consciousness is a bit too underdefined a word. It is probably not as much of a whole as a tree or human as an organism is - it is not even persistent nor stable - and leaves no persistent traces in the world.

You don't get a consciousness carcass left behind or a "monad" as some current theorists postulated. At least we haven't found any.


But consciousness picks out something different from self-model, world model, information processing, etc. That is to say, none of these things sufficiently describes the processes we refer to when we talk about consciousness. A replacement concept would need to capture everything real about the term. We can replace water with H2O in any context and it makes sense. But there are contexts where we can meaningfully speak of consciousness where self-model (apparently) doesn't completely fit, e.g. "the pain of my stubbed toe overwhelmed my conscious experience".


Replace by "overrode my decision capability" (meaning you cannot act beyond reflex) or "overran my symbolic capability" (it was so big you cannot describe it) or perhaps overrode your other senses (couldn't sense anything else physically) or perhaps "stopped your self model from updating" (blocked integration of other perceptions).

Better? Definitely less poetic but perhaps more accurate.


This is a great question! I have some slides that may give some background on the evolutionary themes of this question. ( Starts at slide 20 https://docs.google.com/presentation/d/1pDZLkFTFjuZzM8lIKkuC... ). I personally don't have a real answer. But my understanding is that consciousness emerges as a simulation of the world in which we can test decisions without impacting our survival in the world, and is thus an evolutionary advantage. Why do we experience consciousness in the way that we do? I.e. why do we have a sense of self and agency? I think this has to do with the fact that we have a limited mental capacity and the complexity of the world requires us to focus on particular things, thus bringing all of our mind to a point -- which we experience as our identity. Is it possible for there to be intelligent beings that don't experience consciousness in the way we do? I think so, but now I think we are going into SciFi territory.


No, and some philosophies deem consciousness unnecessary/irrelevant. To them, consciousness is a byproduct of brain activity, much like the heat generated from a burning lamp.

https://en.wikipedia.org/wiki/Epiphenomenalism


I've a different question: why talk about necessity? Not all things that are, are "necessary". Also, necessity has a hint of teleology, which brings along its own host of issues.

It may just be a feature of this Universe, gradually being uncovered.


Perhaps from an evolutionary perspective being conscious helps you survive better than being unconscious?


Almost nothing is necessary. Frogs are not necessary. Calculus is not necessary. Yet these things exist, so we can ask what and how they are.

You sense that your own consciousness exists, so you can ask: what is this thing you sense? And what is this thing sensing?


It is necessary, otherwise we wouldn't be observing this world, and nothing would seem to exist.


A pet hypothesis of mine: I think consciousness comes from our evolved brain capability to simulate and predict the behavior of others for the purpose of recognizing if we are being conned. Consciousness then is being able to apply this capability to ourselves.



I recently read Bruce Hood's The Self Illusion: How the Social Brain Creates Identity. I think there are some interesting ideas there and in Matthew Lieberman's Social: Why Our Brains Are Wired to Connect about the evolutionary value of consciousness that are a bit of a different take, in that consciousness's advantage might be seen as an improvement to collaboration and that identity is more of a means to that end than an end in itself.


2011 "Explicit memory as a framework for the neural correlate of consciousness" https://www.dropbox.com/s/huol1vf4j1fs1ll/mind_matters.pdf?d...


the many problems in the study of consciousness are well documented (hence why this thread begins with "toward a...") -- so of course it must be studied in an interdisciplinary manner (what cognitive science attempts to do). can somebody more familiar with this work in particular explain what it contributes, or summarize the idea behind MICS for us?


too philosophical, how about figuring out first how we identify objects, parse sentences or recall things from memory before talking about consciousness


We can work on figuring out all of those in parallel.


There is a functional and descriptive science of the apparatus of our consciousness, i.e. the things you describe.

There is also the profound mystery of how individually 'unconscious' physical constituents of our nervous system can give rise to our conscious experience of the world.

The domain of the first is neuroscience and will yield its mysteries to us in time.

As for the Hard Problem of Consciousness as I've described it above, I have resigned myself to the belief that we will never be able to peel back that fundamental mystery. One could (and many have) write volumes on why this is the case, but I believe the simplest explanation of why the Hard Problem is Hard is that we are unable to even state the nature of the problem in a satisfactory way. Our language (our comprehension) fails when we try to probe at the root of consciousness.


Could it be that the inability to state the hard problem sensibly is because there is no such problem? I understand it's a somewhat unsatisfactory Dennetian response, but there is the possibility that as neuroscience untangles the softer problems of consciousness, the alleged hard problem will melt away.


I'm inclined to disagree with that view.

A complete physical understanding of the structure and dynamics of the brain and all of its parts would never indicate some subjective experience of consciousness. We only assume that because we know from our own experience that consciousness exists (for ourselves).

Saying it's not a problem is hand wavy. Go touch a hot stovetop and tell me that consciousness does not have a quality and value all its own, beyond the electrochemical correlates of the experience of pain.


The hot stovetop is an interesting example to use, because the muscle-related reaction of your hand removing itself from the hot stovetop is often not a conscious act. But we do consciously perceive the sensation of pain.

In this very specific instance (and others like it) our consciousness might only be reacting to the world around us and comprehending the current state of ourselves and the immediate environment, as opposed to taking a conscious action.


It's the difference between watching audio waveforms on an oscilloscope and listening to Beethoven's Ninth Symphony. These are not equivalent experiences, no matter how accurately the oscilloscope reproduces the waveform.


I'd call it waxing poetic. How do you know your senses and modeling are not actually getting the data for the symphony from an advanced oscilloscope?

Good old brain-in-a-jar experiment applies.


Sensing pain is not a prerequisite for consciousness. We do not even know if having any senses is - though complete lack thereof will cause problems with detecting consciousness. Due to lack of response.

The intriguing part here is to consider if a person in a sensory deprivation apparatus is actually conscious.


Also consider a person who is day-dreaming and one who is lucid dreaming. As with sensory deprivation, both may be having complex, self-aware cognition and remember it later to talk about it.


Of course not, but pain is a prime and salient feature of conscious experience.


Something is causing you to say that you have subjective experience, however, and that's a causal relation that could theoretically be unraveled.

It seems unlikely that your statements have nothing to do with your consciousness, and you just happen to accidentally tell the truth.


There's a simple theory about the root of our consciousness: it's a property of individual cells.

Something like this: http://www.ucl.ac.uk/jonathan-edwards/publications/conscprop...

If this is true (I personally believe so), we need to focus on individual cells/bacteria for the answers.


I enjoyed the linked article.

In the introduction is a proposal/suggestion that "every neuron has at least some form of sentience."

The author describes this word "sentience" as a more primitive (primary, basic, ancestral) kind of consciousness. Something that has:

"simultaneous access to many elements of information in defined inter-relationships, i.e. access to a pattern"

"in which the accessible pattern includes a useful map of some other 'outer' environment, normally the outside of a human being, with a sense of time and, in its fullest form, adult consciousness, a sense of self"

..which seems like a range/spectrum from sentient to fully conscious.

With such a definition, a cell could be considered "sentient", and organisms from molds, plants, animals to humans all demonstrate higher sentience.. I wonder how low it can go: since patterns and relationships are mentioned, could a standing wave be considered to have a low level of "sentience"?


Sure, everything has zero-sentience. Zero-sentience correctly reflects the lack of a sense of itself. Some structures have other levels of sentience in addition to zero-sentience.


> One could (and many have) write volumes on why this is the case, but I believe the simplest explanation of why the Hard Problem is Hard is that we are unable to even state the nature of the problem in a satisfactory way.

Or the nature of our perceptions simply fools us into thinking there's actually a problem to solve.


Or our brains are tricking us into believing that they are universal knowing machines -- that is anything we cannot know cannot be, and anything that is we are capable of knowing.


Interesting figure of speech "our brains". Did you employ some other reasoning tool formulating that sentence? (Yes you did, quite a few. Those tools were designed using the brain too.)


I don't follow -- but I'm curious what you mean?


Elaborate?


The problem of qualia stems from taking certain properties of our perceptions at face value. Like subjectivity, which can't be explained by an appeal to third-person objective facts.

But subjectivity could very well be an illusion. Like how single CPU computers simulate multitasking, the maelstrom of conflicting signals constantly vying for dominance could create an illusion of "inner" and "outer" that we mistake for subjectivity, because we don't have a lens with which to observe this inner process.
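The single-CPU analogy can be made concrete. Here's a minimal sketch (all names and the round-robin scheme are mine, not from any real scheduler) of cooperative interleaving: strictly serial execution whose combined trace nonetheless looks concurrent, much as serial neural signaling might look like a unified "inner" viewpoint:

```python
# Cooperative "multitasking" on a single thread of execution: each task is a
# plain generator, and a round-robin loop advances them one step at a time.
# Nothing ever runs in parallel, yet the interleaved trace appears concurrent.

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"  # one unit of "work"

def run(tasks):
    trace = []
    queue = list(tasks)
    while queue:
        t = queue.pop(0)
        try:
            trace.append(next(t))
            queue.append(t)   # not finished: back of the line
        except StopIteration:
            pass              # finished: drop it
    return trace

print(run([task("A", 2), task("B", 2)]))
# -> ['A:0', 'B:0', 'A:1', 'B:1']
```

The point of the sketch is only that "apparent simultaneity" needs no second processor, just as (on this view) apparent subjectivity might need no irreducible subject.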

See the following for a possible mechanistic explanation of subjectivity: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...


> But subjectivity could very well be an illusion.

This sounds like a contradiction. What is being illuded? If there is no subject, there can't be an illusion.


Man, if I had a nickel for every time someone used that reply...

> If there is no subject, there can't be an illusion.

A common misconception, but incorrect. An illusion is simply a fact that, when taken at face value, entails a false conclusion. There is no reason a computer system can't be deceived by an illusion too, even one without "consciousness".


> Man, If I had a nickel every time someone used that reply...

Perhaps the reason you see it so frequently is because it's obvious?

> An illusion is simply a fact that, when taken at face value, entails a false conclusion.

An illusion is simply what fact? What does it mean to take it at "face value"? What "false conclusion" does it entail?

> There is no reason a computer system can't be deceived by an illusion too, even one without "consciousness".

I think you're equivocating here. What definition of illusion are you using?


> An illusion is simply what fact? What does it mean to take it at "face value"? What "false conclusion" does it entail?

A fact is an observation, a sensory input, a measurement, etc. The observation of a pencil in water [1], if taken at face value, entails a false conclusion.

If instead such a fact were integrated into a larger set of facts from which we infer a coherent picture of reality, a very different conclusion emerges.

No part of the above depends upon any sort of subject.
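That subject-free account can be sketched in a few lines (the function names and the small-angle refraction model are my own illustrative choices): the same observed "fact" yields a false conclusion when taken at face value, and the right one when integrated with a model of the medium.

```python
# A "perceiver" with no subject: it just maps observations to conclusions.
# A submerged point at depth d appears, viewed from above, at roughly d / n
# (small-angle approximation), where n is the refractive index of water.

N_WATER = 1.33

def apparent_depth(true_depth, n=N_WATER):
    # What the light actually delivers to the sensor: the raw "fact".
    return true_depth / n

def naive_conclusion(observation):
    # Takes the fact at face value: "the tip is where it appears to be".
    return observation

def integrated_conclusion(observation, n=N_WATER):
    # Folds the known distortion into the inference: inverts the model.
    return observation * n

true_depth = 10.0                  # cm
seen = apparent_depth(true_depth)  # ~7.5 cm: the observation itself
print(naive_conclusion(seen))      # false conclusion: tip at ~7.5 cm
print(integrated_conclusion(seen)) # ~10.0 cm: coherent with reality
```

Nothing in the pipeline is a "who"; the illusion is just the gap between the naive mapping and the integrated one.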

> I think you're equivocating here. What definition of illusion are you using?

I'm not. I'm using the same definition that I provided to you, namely that an illusion is a fact that naively entails a false conclusion. If you want a dictionary.com definition, "something that deceives by producing a false or misleading impression of reality".

Of course, you could jump on "deceives" again with some unnecessary appeal to subjectivity, which is why I say "entails a false conclusion" instead.

[1] https://pixabay.com/en/pencil-bent-pencil-pencil-in-water-24...


> There is also the profound mystery of how individually 'unconscious' physical constituents of our nervous system can give rise to our conscious experience of the world.

The simplest explanation is that consciousness is a sort of fundamental field.


Haha eventually with all this theory they will just declare "this is what consciousness is!" and tell all of us that our experiences of consciousness are invalid. Most likely we will sit down and take it, like ok, you win scientists. Then we will be replaced by dumb but efficient robots. Yay.


Kind of like they declared "this is what life is", and they were probably right to do so. Vitalism and dualistic theories of consciousness both belong in the dustbin of history.


I belong in the dustbin of history, we're hanging out in the dustbin and I'm throwing a party. Everyone can come wrestle in the mud.


I wonder sometimes if scientists will solve consciousness and realize that we should have just been partying and having fun with each other the whole time.


We picked capitalism, it was a choice, and it says some people get to party sometimes, but not all of the time, unless they did a really good job once.


Expect a huge turnout. You will be amazed at how many of us are going to come :)


> Vitalism and dualistic theories of consciousness both belong in the dustbin of history.

Science adopted materialism as its guiding philosophy prematurely. We've since added electromagnetism and quantum theory to science, but still the hope for the falsification of vitalism persists.


What's premature about it? It seems to have been hugely successful given we're probably talking to each other from across the world.

One could argue that, epistemically at least, a mathematical monism might be better placed as a foundation, but that doesn't change any of the facts we've discovered based on materialism.

If you're in fact arguing for some sort of dualism, there is little reason to accept it beyond unconvincing philosophical speculation.


"Unconvincing philosophical speculation" sounds like, to use the legal term, "assumes facts not in evidence". But to me, claiming that consciousness does not require dualism also seems to require facts not in evidence, that is, it seems to be merely unconvincing philosophical speculation.


If you want the nuanced argument: not requiring dualism to explain consciousness is indeed a belief, but founded on a long history of scientific success in toppling similar matters, and an analogously long history of failure to convince that consciousness will not fit into a scientific framework (for analogous reasons of vitalism's failure to do the same for life).

So on balance of evidence, there is little reason to think dualism will ever be required and considerable reason to think it will not. So much so, that I firmly stand by my original statement.

To clarify the "unconvincing philosophical speculation", I meant thought experiments, like p-zombies, that are supposed to convince one that there is something more to consciousness than can be captured by materialism, but which primarily appeal to vague intuitions and make use of semantic tricks of natural language to convince people that there's something of substance going on.


If you think we have definitively answered the question of what life is... well you are going to do well in modern society and I am not. So you win.


> If you think we have definitively answered the question of what life is

No, but we realized that we don't need a definitive answer to recognize that there's no secret sauce as vitalism would require us to posit.


Yea I’m not a fan of vitalism, I like falsifiable theories. Thanks Popper.


Our medical ethics prevents us from directly studying consciousness in a way that would give us strong causal explanation. I'm unsure if we'd ever allow it, actually.

We might get around the edges, but until you can show you can directly turn consciousness off (and back on again probably) in vivo, in the same way we try to do with every other "effective" medical treatment, the strength of any proof is going to be inconclusive.

The closest we'll probably get is eventually creating some system which convincingly displays the characteristics that humans recognize as "Consciousness." Hopefully we can stop talking about it then.


> until you can show you can directly turn consciousness off (and back on again

But we can, and we do -- every day. That's exactly what general anesthetics do. It's also what happens during sleep.


"The only thing we know about consciousness is that it is soluble in chloroform" ---Luca Turin


Hm really? Try more drugs, like ketamine. Dissociatives in general will give you an interesting perspective on these issues. Personally, I feel conscious in my dreams while sleeping. Also there are gradations of sleep state, such as the one where you don't get high quality sleep because you partially monitor your scary environment.


Evan Thompson explores these ideas further in his book 'Waking, Dreaming, Being' if you're so keen. A brilliant guy, who successfully integrates information from various cognitive disciplines, though he can be dry at times.


You seem to confuse observing that a phenomenon happens with understanding it.

A good friend of mine from college is an anesthetist and we've had long conversations about this, and he says the consensus is that nobody knows how their drugs work. They just have extensive history with the "dials" they use, so to speak.


I'm not saying that the existence of anesthetic technology means we understand consciousness. That's obviously not the case. I'm just pointing out that what the OP listed as the limiting factor on our understanding is already in hand.


Being able to turn off consciousness in a gross way is in no way equivalent to being able to prove that its origin is chemical.

A complete chemical theory of consciousness would need to explain not just how general anaesthetics work - a field which is much less comprehensive than most people realise - but also how exactly LSD, psilocybin, DMT, and other hallucinogens produce all the very specific and very different effects they do.


Is there proof that anesthetics and sleep cause loss of consciousness, or only that they cause amnesia (retroactive discontinuity of consciousness), which we perhaps mis-name "loss of consciousness"?


Loss of consciousness and amnesia turn out to be the exact same thing.

