Start with the definition: A conscious being is one which is conscious of itself.
Seems circular, but there is really no good non-recursive definition. This definition also seems to "ring" perfectly well with random people on the street.
It has been argued that any attempt at a non-recursive definition of consciousness includes things which most people don't consider to be conscious, see for example the paper titled "If Materialism Is True, the United States Is Probably Conscious".
Which leads to the problem of identification: how do we know that rocks are NOT conscious? Or, more interestingly, how do I know the universe itself, in its totality, is not conscious? The universe certainly must have all the requisite "components" of consciousness, whatever those components are.
Once again, the process of identifying is recursive. A conscious being can only identify, with 100% certainty, its own consciousness. Any "other" thing could theoretically be a p-zombie, or merely a machine passing a Turing Test.
Corollary: No conscious being can know, with 100% certainty, that a particular entity is NOT conscious.
Could we have a science of a recursive thing? Perhaps, but only if we are willing and able to accept a recursive model with circular arguments.
You might be interested in a very short 2011 paper of mine, "A paradox related to the Turing Test", on page 90 here:
I'll paraphrase the paradox here. Suppose you can magically detect conscious entities. I begin speaking to you, and you are obliged to periodically guess whether or not I'm conscious.
Here's what I'll do. Whenever you are guessing that I'm conscious, my entire dialog will consist of nothing but "Uhhhh..." over and over, until you change your mind and start thinking I'm non-conscious. Whenever you are guessing that I'm non-conscious, then I'll speak normally.
Since I AM conscious, and you have your magic conscious-detecting ability, you should eventually reach a state where you're certain I'm conscious and never need to change your mind again. But once we've reached that state, which takes only a finite amount of time (so for all you know my whole dialogue could've been a tape recording), thereafter I only ever say "Uhhhh..." A non-conscious machine could do that, so do you change your mind? If you do, contradiction; if you don't, contradiction.
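The adversarial structure of the paradox can be sketched in code. This is my own toy simulation, not from the paper: the speaker tailors each utterance to the guesser's current belief, so any guesser who updates on the latest utterance oscillates forever.

```python
# Toy model of the paradox: a conscious speaker who adapts to the
# guesser's current guess, defeating any utterance-based strategy.

def speaker(guesser_thinks_conscious: bool) -> str:
    """The speaker IS conscious, but speaks based on your current guess."""
    if guesser_thinks_conscious:
        return "Uhhhh..."  # sound like a tape recording
    return "Let me explain my view on the matter."  # speak normally

def naive_guesser(utterance: str) -> bool:
    """Guess 'conscious' iff the last utterance sounded thoughtful."""
    return utterance != "Uhhhh..."

guess = False  # initial guess: not conscious
history = []
for _ in range(6):
    said = speaker(guess)
    guess = naive_guesser(said)
    history.append(guess)

print(history)  # the guess flips every round: [True, False, True, ...]
```

Of course, the paradox's magic detector is supposed to be better than this naive guesser; the point is that any guess the detector publicly commits to is undermined by the speaker's next move.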
The problem still stands though. We can't be sure that the "gland" connects to metaphysical consciousness and not to the metaphysical cloud processor.
Nowadays the "gland" is usually mysterious quantum processes.
The longer you meditate upon this very silly paradox, the more unsettling it becomes :)
Consciousness: the degree to which something is self-aware.
Self-aware: being conscious.
There you go. /s
For example, monkeys are sentient (presuming the self-discrimination test is valid). Some birds are not known to be sentient. Are they conscious? Who knows. Consciousness is a misdefined grab bag of things.
You are (ser) a conscious being, but you are (estar) not conscious in that state.
The issue is not with the op's argument; it is with the nature of (our) reality. We are not saying anything, because we are talking semantics. This is about definitions, not reality. The differences between a rock and a human mind are semantic constructs built on sensory constructs. Semantics are all circular, and we lack the philosophical construct to understand circular argument.
What is the purpose of this question if a rock is anything but? I'm not arguing you're wrong; I'm arguing that if there is a debate here, it's circularly inconsistent, because it presupposes that words have meaning by sole virtue of their own ineptitude. I see no proof that truth is knowable by a mind (in our world, anyway), so at some level I agree with you that it's circular on a multitude of levels. But this argument goes nowhere, and arguing the other side of it (that a rock is a mind) amounts, to me, to saying something like "words have no purpose". It's even worse than circular dependency; it's circularly _inconsistent_.
Are we supposed to limit our understanding of reality by limiting the questions we ask of the language with which we articulate it?
Or, another way: how can we ask what consciousness is for a person or an animal or a computer, if we cannot answer it of a rock?
It is not at all to suggest that words have no purpose or are meaningless. Words are currently our best definition of consciousness.
However... the recursive nature of consciousness doesn't stop us from saying that rocks aren't conscious. You still need a medium that supports recursive operations of the necessary depth and complexity. Rocks don't seem to have such characteristics.
As far as I can tell, consciousness is literally the one thing in the universe that can't be an illusion. Even if, in the extreme case, we're brains in a vat, etc.
Cogito ergo sum!
"I have convinced myself that there is absolutely nothing in the world, no sky, no earth, no minds, no bodies. Does it now follow that I too do not exist? No: if I convinced myself of something then I certainly existed. But there is a deceiver of supreme power and cunning who is deliberately and constantly deceiving me. In that case I too undoubtedly exist, if he is deceiving me; and let him deceive me as much as he can, he will never bring it about that I am nothing so long as I think that I am something. So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind."
But subjective experience isn't the light being "on" or "off". Subjective experience is a messy, fuzzy collection of stuff. People do things without intending to; people believe things they did automatically were intentional, and the reverse. The zone between waking and sleeping isn't sharply set, etc. etc.
If one does the "blind spot" experiment, one realizes one's vision is constantly confabulating to make up for vision's imperfections. And so forth.
So there's no reason not to expect that our experience and memory of consciousness is an incomplete reflection of what's actually happening.
Lots of people strongly believe they have a soul. That's clearly anti-materialist and has no basis in reality but it seems like a quite natural illusion (one that the "hard problem" ideology resembles). The "subjective experience exists" statement, if it's taken to mean a single, uniform fabric of experience exists, seems unjustified given the overall messiness I mentioned and if the hard-problem claim isn't this, what is it?
I have no idea if anyone else subjectively experiences anything; everyone else might just be philosophical zombies. I still want to know why I have any subjective awareness of the world, though, and I don't see any way that that question can be answered via any sort of objective measurement or scientific process. You can study my eyes and my brain to explain why I can accurately perceive the color blue, but nobody can truly explain what it's like to see the color blue.
This has no bearing on the hard problem of describing consciousness or how organic material (maybe silicon some day) gives rise to it. It also doesn't indicate how a physicalist picture of the mind could, in principle, capture the phenomena.
Dream, delusion, reality, drugs, schizophrenia, completely fabricated sensory manifold, whatever; the content does not matter for purposes of this question. Is that brain in a jar experiencing something? Is it experiencing at all?
Humans always liked egocentric views -- "The Earth is the center of the Universe". Or another version: "Humans are superior because they have consciousness".
If there are many levels of consciousness, one may imagine some extraterrestrial super intelligent beings that are many levels above humans, that would think of humans as non-consciousness beings -- like we think of many animals.
Now, if we go down instead, we can say that all animals with brains are conscious (just not as conscious as we are), can't we?
In fact, I submit that it's the duty of anyone saying that the line separating "conscious" and "not-conscious" animals goes here or there to justify not just why that's where they drew the line, but that there is one in the first place.
The experience of having an experience can be wrong. The existence of experience is itself content of the experience, and you have written that the content can be wrong. If one is mistaken about the very fact of experience, is that a valid experience too?
> This has no bearing on the hard problem of describing consciousness or how organic material (maybe silicon some day) gives rise to it.
This assumes beliefs are entirely separate from experiences, but that's counter to the complex experience characteristic of consciousness: one can experience believing many things, including contradictory things if one isn't careful. It's a narrative that claims to make things uncertain but actually, implicitly, maintains a neat, certain rational narrator on top of sensory experience (sensory experience, right or wrong, is spoken of as mere data evaluated by a separate rational actor). However, both neurological research and reflection show that beliefs, consciousness, experience, and so forth are all meshed together.
We're not talking about the content of our experiences, and as long as you keep insisting that's somehow part of the picture, we're never even having a conversation. "Consciousness," as we're using it, is the simple fact that you're experiencing at all. Nothing else. Not the content. Not the feelings about the content. Not the "neat, certain rational narrator on top of sensory experience". Just the raw, unfiltered, unqualified fact that you are having an experience at all. Full stop.
Everything else is detail, and can very easily be wrong. The fact that there is an experience happening in the first place, can't.
Otherwise, please explain to me how I can be mistaken about the fact that I am experiencing. Not what I'm experiencing — that I'm experiencing.
If you interact closely with people having psychotic breaks and other mental disorders, you can witness someone mistaken about the boundary between themselves and the environment. If you can wrap your mind around that level of malfunction, I think you will begin to appreciate that the very statement "the fact that I am experiencing" is begging the question. The abstract concepts of identity, self, perception, and memory are all intertwined and more nebulous than is comfortable to think about.
Let me borrow a helpful phrase from another reply above: inner life. I recommend some meditation on the possible inner lives of a whole spectrum of creatures. I take the liberty of assuming you would grant humans the richest inner life. What inner life exists in a great ape, dolphin, octopus, monkey, dog, cow, bat, field mouse, tarantula, lobster, honey bee, earthworm, clam, coral, or amoeba? Within any one of those species, how does an inner life vary between a zygote and a fully developed and experienced adult? What about members of those species who have a sensory disability and go their whole lives with a reduced complement of sensory organs or sensory nerves?
Consciousness may be merely an abstraction we place on a typical complex of sensory processing and introspective abilities in our minds. People may have more or less experience with alteration of this complex and form other abstractions such as unconsciousness, trance, daydream, hallucination, blackout, or catatonia.
Outside weird scenarios I mentioned in my previous post, we can only discuss the subset of these experiences which we remember. We don't even know what other qualitative experiences we may be having throughout our lives which are not normally encoded to memory. At risk of returning to my previous topic which you discard as mere "content" of awareness: I would argue that our introspection is just as vulnerable to mistake, confabulation, and delusion as our awareness of the external world. It may be a convenient fiction, much like our visual system papers over the blind spot in our retinas and the motion blur of rapid eye movements.
> If you assume dualism...
Which I don't. My position on this stuff is in the neighborhood of panpsychism or Objective Idealism, but not quite either, and both of which are very much monist notions.
If you weren't accusing me of holding a dualist position, could you please clarify what you're saying here?
> I recommend some meditation on the possible inner lives of a whole spectrum of creatures.
I've actually spent a fair amount of time contemplating the question of the nature of the inner lives of things not-me, thanks. I can't speak to what that experience might be like. Epistemic asymmetry is a thing. What I don't spend much time contemplating is that they have one.
It's interesting, though, that you bring that up — bats, specifically. Please read Nagel's "What is it like to be a bat?" if you haven't already. It's one of the first papers I'm aware of to meaningfully articulate this question. Dennett, himself — the thinker whose oeuvre, and particularly "Consciousness Explained", frames this discussion — called it "the most widely cited and influential thought experiment about consciousness."
So, for clarity, my position goes like this: I have an experience of the thing I mean when I say "consciousness" — this "inner life", which, until I'm convinced is somehow a delusion, I will continue to regard as the only direct experience I have. On the principle that solipsism is bullshit, I assume that the other beings like me with whom I share the world also have this 'inner life' thing going on.
Additionally, I see in the world other things which are clearly "beings", but which are, to varying degrees, not like me and the other beings I assume have inner lives. What about them is different, such that they wouldn't have this experience? Is it language? Many have that. Tool use? More than a few. Warm blood? (See: "Do Fish Feel Pain?")
It is, I submit, most parsimonious to assume consciousness of some form on their part, as well. When someone suggests otherwise, they're making a positive assertion, not merely that this imaginary line between "conscious" and "not conscious" goes in that specific place, but that there is a line in the first place, and must justify both.
> I would argue that our introspection is just as vulnerable to mistake, confabulation, and delusion as our awareness of the external world.
Of course it is. I don't think that's even in question. But that's actually my point: when you're introspecting, that's "you", there, in that moment, on that ride. Whether the things you're introspecting about are materially reflective of their antecedents, whether you're thinking clearly or biasedly, whether even your entire narrative in that moment, if not always, is just so much self-serving bullshit, does not matter. You're still introspecting.
That's what I mean when I say the content doesn't matter. Whatever I'm experiencing, true or false, I'm still experiencing.
The question no-one can answer to my satisfaction — all this hand-wringing about its being irrelevant, or illusory, or even incorrect notwithstanding — is how, if the only "things" that matter are reducible to observable, measurable, falsifiable phenomena, it should feel at all.
Because the thing I most want those tools to explain is how I can be here, in existence, having the experience of being here questioning my existence at all.
There is precisely nothing in this kind of reductive ontology that can account for the fact that there is a qualitative nature to my being at all. And yet it is the most direct experience I have. Not what I see, that I see. And, yes, the emotional associations I have with what I see, upon seeing it. Of course that comes from a different part of the brain. Yet I feel that too.
That's the question I want answered — the very thing being hand-waved away.
Yes, I was referencing that being a bat problem. However, what I meant to emphasize was the spectrum of creatures. Most people grant inner lives to familiar animals even though we cannot claim to comprehend them directly. But, many people have incoherent beliefs as they work further down the ladder towards plants, colonies, and other simple life. Their need to find a boundary where the spark occurs is closeted dualism, in my opinion.
I consider a quest for qualia to be essentially dualism in disguise, and endless debates about the term itself to be a proxy war over just how to beg the question. When I was lectured by Searle on his views on being a bat and his Chinese Room, I felt he was begging the question and playing to the peanut gallery. He wants you to assume there is "understanding" that a room lacks. He doesn't want you to dwell on the difference between the room and the system comprised of the room, the rules, the translation state, and the executive functions literally embodied by attendants. I enjoyed a contemporaneous course from Lakoff, to consider his metaphorical mappings and category thinking as a counterpoint. We all liked to joke that it might be metaphors all the way down. My reading of these topics may have begun my own gradual turn towards reductionism and behaviorism.
I do not think qualia deserve any special place compared to any other abstraction. I don't think there is anything more real about the sense of first-person experience than there is about a sense of justice or love in a particular human interaction. I think that these are all in the same category of perceptual abstraction. I think ALL abstractions admit or even encourage an ontological failure mode, which is to impose discrete symbols over fuzzy realities and then tempt logical deductions over this willful ignorance. This is why I frequently return to topics like identity, memory, and mental pathology, which illustrate such failings. I question the "you", the "moment", and the "ride" as all being convenient but potentially misleading abstractions.
Finally, I think that the qualitative experience you seek to explain is tied up in the mind-body problem and has as much to do with introspection of the body and hormonal systems as it does with introspection of cognition. I took interest in Rodney Brooks and his subsumption architecture as a possible explanatory mechanism here. To keep insisting that you can discuss the quality of experience independently of the content is, again, a signal of latent dualism to me.
Especially given the lack of logical rigor on display when even otherwise decent scientists talk about consciousness...
Mathematics has much more consistent definitions of categories and similar concepts with real measurable results and applications.
Why is it hard? It's not as if evolution failed to provide us with the framework for it. Primitive beings have very simple inputs and minds. As the behaviour needed for survival gets more complex, minds also need to join data from different senses into an integrated image. We also need pain and pleasure to direct behaviour. Quantitative changes result in qualitative ones, and that's it.
Everybody acknowledges how the brain is such a complex machine, and then gets surprised that it's able to build perception. That's weird.
You're describing a reason why consciousness might exist, but you're no closer to describing how it exists.
The reason it's a hard problem is that we don't understand the relationship between a purely physical process and awareness. It's not clear how at some point a complicated set of behaviors would equate to the personal experience we ourselves feel - we don't have good mechanisms or tools yet to explain how that jump is possible.
So yeah, it's obvious that the jump happens. But outside of philosophers and scientists like Daniel Dennett, probably the majority of people are already convinced of that part.
The hard problem isn't "why qualia", it's "how qualia".
Soon you will discover that consciousness is an empty word much worse than dark matter or energy in physics, as those describe observable things with clear definitions.
What is a quale? Saying it is a quantum of subjective experience explains nothing because then you have to properly define experience in general. Trivial definitions end up at sensory levels which is probably not what was meant.
Advanced definitions like say "symbolic representation of an experience" are more workable but anger philosophers.
(it opens practical avenues of research, such as: how humans assign symbols, what kinds of symbols they construct, how symbols are communicated, what the neural correlates of experience are that may relate to symbols, and how internal and external symbols are discerned)
Incidentally, AI may have such qualia in as much as it can self generate symbols.
Or you have to first define what consciousness is, and not in terms of qualia.
Or at least define what "subjective" is, without trying to say it is something that cannot be objectively quantified, because that is a known lie.
Qualia is experience.
Rocks don't experience things, they perform chemical reactions according to physics. In a world without qualia, there is no such thing as experience. The hard question is how chemical reactions result in something like experience. There's no logical, causal relationship we can find as to why a chemical reaction should result in one.
And even though we don't know much about what experience is or how it works, we seem to be universally aware of it. In fact we seem to have a mechanism to directly observe it. This is another thing that's weird to us and that makes "experience" special and interesting, because it's atypical - usually we can only observe things through external means; sight, touch, etc...
With "experience", this is reversed: almost as if by gaining the ability to directly observe a thing, we lose the ability to externally observe it. There is significant debate over whether this is an inherent property of the thing, or whether advances in our measurement tools will allow us to either indirectly observe the thing, or at least observe its effects.
You're coming into this with the assumption that the ability to communicate about a thing is a prerequisite to that thing existing. It's not. The map describes the territory, not the other way around.
We don't have good definitions of what consciousness is, which is why it's hard and why physicists and philosophers want to study it. It's a thing that we can see and we are trying to learn its properties. As we learn those properties we continue to attempt to map it into another medium - language.
Then why say that it's a hard problem, or even that it's a problem at all? We will cross that bridge when we reach it. And why do you say there's a jump? Or, more precisely, which jump are you talking about? I hope you don't mean the jump between subjective experience and the physical framework that supports it... Wittgenstein showed long ago how that's a fool's errand.
From a purely physical perspective, there shouldn't be such a thing as subjective experience.
We don't, strictly speaking, need to solve that problem right now. We could wait and cross that bridge later. We could just leave it at "it's an emergent property, and it works somehow." In the same way we could also say, "well, big objects attract each other, and we're gonna leave gravity at that."
But for various reasons, philosophers and scientists find that kind of answer unsatisfying.
Feynman's response in the video you link is not that it's unreasonable to ask about the mechanics of a thing. Quite the opposite - it's that simple questions are often very deep and very complicated and can't be described in a quick one or two sentences.
What he's not saying is: "Why is the sky blue? Well, why shouldn't it be?"
That's appeal to ignorance.
> and we don't have any precedent for something similar happening in other situations
Precedent? Come on, that's like saying atoms shouldn't exist because there is no precedent for them.
> We also observe that many properties of subjective experience are counterintuitive
That's vague and terribly subjective. Who decides what is intuitive and what's not in this context?
> and seem to contradict what we would expect from an emergent system
Who would expect what? Again you are simply saying that you don't understand it so it must not be possible.
I linked Feynman's response just to make you understand that what you are saying lacks any reference frame. You are using your own intuitions to declare impossible a thing that, to begin with, already exists :) and that, moreover, is exactly what generates your intuitions, so you are falling into circular reasoning.
Oh, and Feynman doesn't say that magnetism can't be explained or that it shouldn't exist, just that it can't be explained in terms of other phenomena; exactly the same is true of consciousness.
Edit: HN won't allow me to reply to your last comment, so I will put it below, though I won't comment further anyway.
It's not that I'm not interested in how consciousness emerges. What I object to is the characterization of the basis of consciousness as a hard problem. It's just an area of knowledge we don't have enough information to talk about. Saying it's a problem, even a hard one, is jumping to conclusions very prematurely.
I... don't think subjective experience is impossible. I think that our current knowledge about how it works is very limited, there's a lot for us to learn, and I'm encouraged when philosophers and scientists research it more. I'm discouraged when people say "it exists, what else do we need to know?"
Are we saying the same thing in different ways?
When I say "jump", I don't mean that there is no explanation for subjective experience. I mean that we do not currently know what that explanation is. It's a jump because it is currently a gap in our knowledge about a phenomenon that exhibits interesting properties and that we don't know how to reliably measure or reproduce.
When we talk about things like consciousness we have to "jump" over that gap. It would be nice to fill in the gap a little. Is it possible you thought that I meant something else?
For a lot of people, basic questions like "what are atoms," or "why is there matter", or "how does a physical process create a subjective experience" are really interesting. And to a certain extent, why should atoms exist? It's not a bad question - we can start talking about electrons, and then we can start talking about forces, and then at a certain point we'll have to say, "well, we don't know about this thing yet. Ask again in a decade and maybe we'll have some theories to propose."
There are large branches of science devoted to these types of questions, and pursuing them has led to useful theories and practical applications in the past.
Dennett would reply: once I've explained every detail of the physiological system, and in so doing perfectly mapped inputs to outputs, what is there left to explain?
While attractive, it is not logically valid
DD may be different...
Maybe DD is a true zombie.
In another well-known article, “Quining Qualia” (1988), Dennett challenges not just our conception of pain, but all of our different notions of qualitative states. His argument focuses on the apparently essential features of qualia, including their inherent subjectivity and their private nature. Dennett discusses several cases—both actual and imaginary—to expose ways in which these ordinary intuitions about qualia pull apart. In so doing, Dennett suggests our qualia concepts are fundamentally confused and fail to correspond with the actual inner workings of our cognitive system.
X (qualia, what we don't know) does not correspond to Y (cognition, what we understand), so X does not exist.
Yes. It is this ridiculous. :)
It's the exact same process that happens in every other science: you don't expand your axiomatic basis unless you have to.
Saying it does not make it so.
> We have no reason to suppose X because Y can explain X as an illusion.
But Y does not really explain X. That is the core point. Saying Y explains X is just ignoring the question. It is just Dennett claiming "Y explains X" with a flawed argument.
> It's the exact same process that happens in every other science: you don't expand your axiomatic basis unless you have to.
Btw, when talking about axioms: there are rigidly formalized versions of physics and math (look up Mizar). Unless Dennett proves his claims as a formal theorem with rigidly defined axioms, I am not buying his arguments. Too much fluff in philosophy when dealing with important problems.
Just the word "illusion" can have n different definitions and maybe the illusion itself is an illusion :D
We have no reason to believe that Y cannot explain X, and ample reason to believe it can. Every previous circumstance of special pleading of this sort has eventually fallen to scientific inquiry.
Furthermore, various observations and theorems in physics, like the conservation of energy and the Bekenstein Bound, suggest strongly that the mind is a fully encapsulated, bounded physical system.
The only way to escape this inevitability is to posit that consciousness is some extra-physical quantity that has no impact on physical matter whatsoever. And what problem does adding this to our axiomatic basis solve exactly? None that I can see except yet another instance of special pleading.
If anything, such a step actually introduces more problems, because how would you explain the fact that we, beings of physical matter, are talking about consciousness if consciousness cannot influence matter?
Put another way, it would fail to capture why that system (our brains) produces any experiences at all. I know this is disputed, of course. But I basically think the hard problem is a real thing:
A memoryless system and a stateful one may produce similar or even identical output given identical input. Since you cannot test an infinite span of inputs, you cannot quite be sure which one you're dealing with.
This is even harder if both systems are noisy.
The mechanistic corollary is that you have to mess with whatever causes consciousness to actually understand it. Inject states, damage components and more. Even then you might make a mistake.
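The indistinguishability point can be made concrete with a toy sketch (my own illustration, with made-up names and a made-up horizon parameter): a stateful system can mimic a memoryless one for longer than any finite probe you run.

```python
# Toy illustration: a memoryless system and a stateful one that agree
# on any finite probe shorter than the stateful system's hidden horizon.

def memoryless(x: int) -> int:
    """A pure function: output depends only on the current input."""
    return x * 2

class Stateful:
    """Counts calls internally; changes behaviour after `horizon` calls."""
    def __init__(self, horizon: int):
        self.calls = 0
        self.horizon = horizon

    def __call__(self, x: int) -> int:
        self.calls += 1
        if self.calls <= self.horizon:
            return x * 2        # mimic the memoryless system...
        return x * 2 + 1        # ...until the hidden state kicks in

s = Stateful(horizon=1000)
probe = range(100)              # any finite test sequence
same = all(memoryless(x) == s(x) for x in probe)
print(same)  # True: this finite probe cannot tell the systems apart
```

Since the horizon can always be chosen larger than your probe, no finite black-box test settles which kind of system you have, which is why the comment above says you have to open the box and mess with the internals.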
This sounds like a contradiction. Who is being illuded? If there is no subject, there can't be an illusion.
Yet Dennett shows that there is no such single centralized point where consciousness emerges, just a brain creating that illusion to make higher-level, lossy sense of things (a consequence of modeling your own cognitive processes using Descartes-era philosophy is that you have to place an artificial cut-off somewhere). Cognitive science has experiments like the Stroop Test, which prove that different cognitive faculties operate at different speeds and in parallel. They may even compete for attention. Consciousness is thus distributed, a "multi-agent" system; you may not even become "consciously" aware of what the fast-operating faculties are processing and feeding up to more complex faculties. Other reaction-speed experiments have shown that data is processed (helping you make decisions) before it enters conscious thought (so all "behind the stage").
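The competing-faculties picture behind the Stroop effect can be caricatured in a few lines. This is purely my own toy model with hypothetical latencies, not a claim about real neural timing: fast automatic word-reading races slower deliberate color-naming, and disagreement between them shows up as a delay.

```python
# A toy model of Stroop interference: two "faculties" race in parallel,
# and the slower, task-relevant one pays a cost when the faster one
# produces a conflicting answer. All numbers are illustrative.

FAST_READ = 30       # hypothetical latency (ms): automatic word reading
SLOW_NAME = 50       # hypothetical latency (ms): deliberate color naming
CONFLICT_COST = 20   # extra time to suppress the faster, wrong answer

def reaction_time(word: str, ink: str) -> int:
    """Time to name the ink color of a printed color word."""
    if word.lower() == ink.lower():       # congruent: faculties agree
        return SLOW_NAME
    return SLOW_NAME + CONFLICT_COST      # incongruent: reader interferes

print(reaction_time("RED", "red"))    # 50: congruent trial
print(reaction_time("RED", "blue"))   # 70: incongruent trial is slower
```

The single number that comes out looks like the behavior of one agent, even though it is the resolution of a conflict between parallel processes, which is exactly the point of the comment above.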
If I think "I think", it is a vast simplification of the processes going on in my brain, but it doesn't mean that there are no processes corresponding to my thinking "I think".
Most of the time I act as a single intelligent agent, not as a bunch of subsystems with no unifying goals. So this "illusion of consciousness", while being a simplification of brain processes, falls in the category of things we usually don't call illusions.
And that's why Dennett's wording seems misleading to me.
I like calling it a "world model". Optical illusions prove that this "world model" can be consistently tricked for a wide range of humans. Now, the "illusion of consciousness" is a categorization error: you apply your "world model" to your own internal processes / consciousness. So far so good. But this does not give you the right to claim that consciousness is your "world model's" view of it. If it were, then people who took LSD and had their "world model" believe they could swim in the sky could claim to change reality/ontology/the world for all of us.
The illusion is that your world-model view of consciousness does not equal consciousness in reality, as shown by science, however clear it may appear to you (it is not that conscious experience itself is an illusion).
> Most of the time I act as a single intelligent agent, not as a bunch of subsystems with no unifying goals.
This is the illusion. To you it seems that you act as a single intelligent agent, unaware of the thousands of majority votes by your senses/neurons that lead up to that point.
Both your view of yourself as a single acting agent and this illusion are still legitimate. But again, it is a categorization error to conclude that your experience validates that humans are all individually acting single intelligent agents. That's Descartes-era philosophy and the legacy that Dennett was railing against: "I think, therefore I am" becomes "I think that I am a single agent, therefore I am such".
You can compare "hunger" with "consciousness". Disease can make one feel not hungry while medical investigation shows that the body is in desperate need of sustenance, or vice versa. Now, is it an illusion/delusion to say "I am hungry" when your body is already full? Maybe. But it becomes a mistake when you declare "I feel hungry, therefore my body needs more sustenance". That seemingly logical conclusion is the result of an illusion.
No, it's "Most of the time I act as a single agent, therefore my perception of myself as a single agent is not an illusion, but a simplified model". It doesn't matter how many neurons voted, as long as I'm doing what I thought I was going to do. Experiments can poke at edge cases where the self-model is incorrect, but that's expected.
In the case of a brain, my self-model also runs on the brain's neurons and is part of what the brain is doing. That makes it even less illusory than an internal model of the world.
But cognitive science, if only through the study of brain lesions and experiments, can offer glimpses of what is out there. What the weather really is like.
But what if the very act of categorization was an error to begin with? Causal inference poses problems like: Does the barometer change cause the storm, or does the storm cause the barometer change? These can be better solved by saying: The pressure in the barometer changing _is_ (part of) the storm. Instead of saying: If I go up, I cause the storm to follow ("If I am thinking I am a single agent, my consciousness must be singular").
In the end you are free to call it a simplified model rather than an illusion, and I encourage you to. But discarding all of Dennett's philosophy of consciousness on the basis of one poorly chosen word is not a valid or fruitful conclusion. You'll miss the memetic good sauce that cures Naive Realism: https://en.wikipedia.org/wiki/Na%C3%AFve_realism
I myself, personally, prefer RAW's Maybe Logic approach to consciousness, though it is arguably less academically sound (though no less wise for it): https://www.youtube.com/watch?v=A7N6TOFyrLg
I don't know anyone who sees consciousness this way, but perhaps we've met different sorts of people.
Your second paragraph seems like a red herring. I don't see how any of this addresses the issue of conscious experience in the first place.
It is not always the goal (or possible) to create a complete model of conscious experience that is indistinguishable from reality, yet one may model the path of a hurricane without getting blown away by the wind.
He dances around the thing pretty much everyone else means when they say "consciousness" or "the Hard Problem" and says of everything he can, "Welp, that's not the thing they're talking about!", except the thing we're talking about, which he more or less doesn't even acknowledge.
"'Blue' means only Pantone-292. The sky isn't Pantone-292. Therefore, the sky isn't blue."
Specious, isn't it?
A subjectivist sees art, and reasons that the art object is fully formed inside his/her mind. An objectivist sees art, and reasons that the art object is entirely contained in the physical manifestation of it. Consciousness Explained was an attempt to show this objectivist view of consciousness.
Later on he tried to marry both in heterophenomenology: "The sky seems blue to non-colorblind subjects, but objectively, in reality, it is part of the human-visible spectrum of wavelength n." This framework still gives legitimacy to the subjective experience of a colorblind person, without allowing it to change physical reality (for consciousness: allowing for personal experience and the realization of it, without allowing this to change the neuroscience/ontology: "I experience sequential thought, therefore thoughts must be sequential, not parallel").
That is an excellent analogy. Perhaps "emergent property" would be a better term for both, but even that phrase doesn't quite capture the true spirit of Dennett's argument. It's not just that consciousness is emergent, it's that its true nature is actually very different from what we think it is.
But what is the perspective of neural circuitry's acquaintance with neural circuitry? I don't know, but I know it's not like our third-person acquaintance with neural circuitry. And so saying phenomenal experience is an illusion of neural circuitry (and thus not real) is a mistake. It's like saying heat isn't real, only energetic particles are real, despite the fact that I just got burned by my hot pan.
> It's like saying heat isn't real, only energetic particles are real, despite the fact that I just got burned by my hot pan
Which is true, heat isn't real, just like my car isn't real. These are labels we apply to loose macroscopic phenomena. And so it is with consciousness.
At some level of abstraction, we can certainly talk about consciousness as something real since it's clearly a phenomenon requiring explication, but the "real" we're talking about in this sort of debate is some irreducible metaphysical existence, such as that posited by dualism.
But this seems like a mistake. When someone like Dennett says that phenomenal consciousness doesn't exist, it seems like he's denying the reality of the appearances of phenomenal experience. That is, we say that phenomenal experience seems a certain way to us, but Dennett counters that this appearance is false. Denying the reality of the appearances of phenomenal experience is denying the existence of what most people take as the explanandum in this debate.
But there's a difference between denying a theory of the nature of a phenomenon and denying the phenomenon. We can say that phenomenal experiences aren't irreducibly fundamental without denying the existence of phenomenal experience. But illusion talk denies the phenomenon; it's not merely saying it doesn't have a fundamental existence.
I don't see how. An illusion is a perception entailing a false conclusion when taken at face value. The perception clearly exists, but what it entails is the illusion.
Qualia would then fall under the same category as other perceptive illusions, like optical illusions:
Just so we're clear, when you say "appearance of phenomenal experience", I read, "the entailment of a perception". And it seems perfectly sensible to say that the entailments of perceptions can and often are false.
But what does the perception of a red cup entail about my inner state? Nothing that I can tell, except that I am having a perception of a specific kind. What does it mean to say that my perception of a specific kind is an illusion? It's hard to say. The perception gives me certain powers of discrimination that entail the necessary veracity of those aspects of the perception (i.e. I can tell red from blue, red from pain, red from pitch, etc). Are there aspects of perception that don't play a role in any kind of discrimination? Not that I can tell. And so perceptions themselves just don't seem like the kinds of things that can be properly called illusions.
"You" is the thing (or, more accurately, the process) that is conducting this conversation.
> What is the "you" that is aware of (the rest of your brain's) decisions, that is observing your thought process?
But that is the whole point: you aren't actually aware of the rest of your brain's decisions. You think you are, but you're wrong. That is Dennett's thesis.
To back what up, exactly?
What evidence do you have of this? In fact, the majority of practicing philosophers are actually physicalists (56.5%):
It seems like it'd be the next logical step: if consciousness is simply what (sensory) computations feel like "from the inside", then it stands to reason that other computations can also "feel like" something from the inside, regardless of the substrate they run on, no?
That's incorrect, unless you consider machine learning algorithms to suddenly be subjects/"knowers". They suffer from some of the same perceptive illusions that plague our optical processing.
So to answer your question directly, there is no "who", there need only be a "what", at which point any problem with calling qualia an illusion evaporates. There is no irreducible subject involved, there is only an information-processing automaton that integrates facts acquired from perceptions. When a perception is taken to be accurate but it entails a conclusion that is false, that's an illusion.
I could argue that any machine learning algorithm is simply making probabilistic calculations based on input data in order to reach some kind of singular output. In this view (which I wouldn't discount), the machine learning algorithm is little more than a very advanced function. In my view, a function is not something I would consider a "subject", nor would I consider it to have any ability to "perceive" an illusion. There's no sentience.
If we say the complexity of the algorithm is great enough that it can be said to "perceive", I would argue the algorithm does become a "who" instead of a "what". Still, there is a surprising amount of nuance to this concept that makes it very interesting.
Pure speculation, and almost certainly false. If you hadn't evolved the pleasurable feeling of eating an apple, or any other food, you would have died and your inability to feel would have gone with it. Feelings provide drive and motivation, they are functional.
The water that I drink has been inside countless beings for eons. My body is composed of elements that have been on this planet for eons. I think I'm separate from the rest of the planet, of the system, but am I, really?
I am like a VM inside a hypervisor. I think I'm separate from the whole, but that's only an illusion.
Our senses are powerful and allow us to navigate the world, but if we train ourselves to go beyond them, beyond the language we collectively created, beyond even our memories and thoughts/feelings associated with them, what will we find? Perhaps consciousness is something that should be studied individually?
Neural scene representation and rendering (deepmind.com) https://news.ycombinator.com/item?id=17313937
seems a step in that direction.
Does not imply that you necessarily understand what you can create.
In this instance, I feel creating something that behaves like a normal brain would not straightforwardly lead to an understanding. On the contrary, it may leave us with more questions than answers.
Depends on the level of abstraction at which you create it.
1. a person creates a complex video game using assembly language (Roller Coaster Tycoon). It can probably be said this person understands very intimately how every aspect of his game works at every level of the computer.
2. a person creates a complex video game using <Unity, Unreal, etc.>. It can probably be said that the person understands how their game works at a high level, but lower levels (memory, OS, graphics) are still magic to that person.
I would concentrate on collecting empirical evidence, at the expense of developing further theoretical speculation along the lines of "p-zombies". I could see a Darwin-like figure traveling widely and recording detail minutely, using it to make the case for a simple unifying force behind an enormous spectrum of variation.
I think my wording was poor. I don't mean necessary for things that may exhibit consciousness. I mean why should consciousness even be considered to exist? Why is it a necessary complication to understanding the mind?
You don't get a consciousness carcass left behind, or a "monad" as some current theorists have postulated. At least we haven't found any.
Better? Definitely less poetic but perhaps more accurate.
It may just be a feature of this Universe, gradually being uncovered.
You sense that your own consciousness exists. So you can ask: what is this thing you sense? And what is this thing doing the sensing?
There is also the profound mystery of how individually 'unconscious' physical constituents of our nervous system can give rise to our conscious experience of the world.
The domain of the first is neuroscience and will yield its mysteries to us in time.
As for the Hard Problem of Consciousness as I've described it above, I have resigned myself to the belief that we will never be able to peel back that fundamental mystery. One could (and many have) write volumes on why this is the case, but I believe the simplest explanation of why the Hard Problem is Hard is that we are unable to even state the nature of the problem in a satisfactory way. Our language (our comprehension) fails when we try to probe at the root of consciousness.
A complete physical understanding of the structure and dynamics of the brain and all of its parts would never indicate some subjective experience of consciousness. We only assume that because we know from our own experience that consciousness exists (for ourselves).
Saying it's not a problem is hand wavy. Go touch a hot stovetop and tell me that consciousness does not have a quality and value all its own, beyond the electrochemical correlates of the experience of pain.
In this very specific instance (and others like it) our consciousness might only be reacting to the world around us and comprehending the current state of ourselves and the immediate environment, as opposed to taking a conscious action.
Good old brain-in-a-jar experiment applies.
The intriguing part here is to consider if a person in a sensory deprivation apparatus is actually conscious.
It seems unlikely that your statements have nothing to do with your consciousness, and you just happen to accidentally tell the truth.
Something like this: http://www.ucl.ac.uk/jonathan-edwards/publications/conscprop...
If this is true (I personally believe so), we need to focus on individual cells/bacteria for the answers.
In the introduction is a proposal/suggestion that "every neuron has at least some form of sentience."
The author describes this word "sentience" as a more primitive (primary, basic, ancestral) kind of consciousness. Something that has:
"simultaneous access to many elements of information in defined inter-relationships, i.e. access to a pattern"
"in which the accessible pattern includes a useful map of some other 'outer' environment, normally the outside of a human being, with a sense of time and, in its fullest form, adult consciousness, a sense of self"
...which seems like a range/spectrum from sentient to fully conscious.
With such a definition, a cell could be considered "sentient", and organisms from molds, plants, and animals to humans all demonstrate higher sentience... I wonder how low it can go: since patterns and relationships are mentioned, could a standing wave be considered to have a low level of "sentience"?
Or the nature of our perceptions simply fools us into thinking there's actually a problem to solve.
But subjectivity could very well be an illusion. Like how single CPU computers simulate multitasking, the maelstrom of conflicting signals constantly vying for dominance could create an illusion of "inner" and "outer" that we mistake for subjectivity, because we don't have a lens with which to observe this inner process.
See the following for a possible mechanistic explanation of subjectivity: https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...
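The single-CPU analogy above can be made concrete. Here is a minimal sketch (the task names and step counts are invented for illustration) of round-robin time-slicing: one processor executes strictly one step at a time, yet the interleaved trace looks like two things happening "at once":

```python
# A minimal sketch of the single-CPU analogy: one processor interleaves
# tasks in small time slices, producing an appearance of simultaneity.
# Task names and slice sizes are illustrative, not from the discussion.

def task(name, steps):
    """A 'process' that yields control back after each unit of work."""
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(tasks):
    """Serially run one step of each task per turn: no real parallelism."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)        # re-queue: it gets another slice later
        except StopIteration:
            pass                   # task finished; drop it
    return trace

trace = round_robin([task("A", 3), task("B", 3)])
print(trace)  # steps of A and B alternate, though execution was serial
```

Nothing here is ever concurrent; the "multitasking" exists only in how the trace reads, which is the shape of the illusion the comment describes.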
> If there is no subject, there can't be an illusion.
A common misconception, but incorrect. An illusion is simply a fact that, when taken at face value, entails a false conclusion. There is no reason a computer system can't be deceived by an illusion too, even one without "consciousness".
Perhaps the reason you see it so frequently is because it's obvious?
> An illusion is simply a fact that, when taken at face value, entails a false conclusion.
An illusion is simply what fact? What does it mean to take it at "face value"? What "false conclusion" does it entail?
> There is no reason a computer system can't be deceived by an illusion too, even one without "consciousness".
I think you're equivocating here. What definition of illusion are you using?
A fact is an observation, a sensory input, a measurement, etc. The observation of a pencil in water, if taken at face value, entails a false conclusion.
If instead such a fact were integrated into a larger set of facts from which we infer a coherent picture of reality, a very different conclusion emerges.
No part of the above depends upon any sort of subject.
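The pencil-in-water point can be put in purely mechanical terms: a program that takes the measurement at face value draws a false conclusion, while one that integrates a further fact does not, and no subject appears anywhere. A sketch under invented numbers (only the standard refractive-index physics, apparent depth ≈ real depth / n for near-normal viewing, is real):

```python
# A subject-free "illusion": a system that takes a refracted observation
# at face value entails a false conclusion; integrating more facts fixes it.
# The scenario and depths are invented; n = 1.33 is water's refractive index.

N_WATER = 1.33

def naive_depth(apparent_depth_m):
    """Take the observation at face value: the object is where it appears."""
    return apparent_depth_m

def corrected_depth(apparent_depth_m):
    """Integrate a further fact: near-normal viewing compresses depth by n."""
    return apparent_depth_m * N_WATER

real_depth = 1.0                    # metres, ground truth in this toy scenario
apparent = real_depth / N_WATER     # what a camera above the water measures
print(naive_depth(apparent))        # ~0.75 m: the false conclusion
print(corrected_depth(apparent))    # 1.0 m: the illusion dissolves
```

The "deception" is entirely a property of how the fact is processed, which is the sense of "illusion" being defended here.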
> I think you're equivocating here. What definition of illusion are you using?
I'm not. I'm using the same definition that I provided to you, namely that an illusion is a fact that naively entails a false conclusion. If you want a dictionary.com definition, "something that deceives by producing a false or misleading impression of reality".
Of course, you could jump on "deceives" again with some unnecessary appeal to subjectivity, which is why I say "entails a false conclusion" instead.
The simplest explanation is that consciousness is a sort of fundamental field.
Science adopted materialism as its guiding philosophy prematurely. We've since added electromagnetism and quantum theory to science, but still the hope for the falsification of vitalism persists.
One could argue that, epistemically at least, a mathematical monism might be better placed as a foundation, but that doesn't change any of the facts we've discovered based on materialism.
If you're in fact arguing for some sort of dualism, there is little reason to accept it beyond unconvincing philosophical speculation.
So on balance of evidence, there is little reason to think dualism will ever be required and considerable reason to think it will not. So much so, that I firmly stand by my original statement.
To clarify the "unconvincing philosophical speculation", I meant thought experiments, like p-zombies, that are supposed to convince one that there is something more to consciousness than can be captured by materialism, but which primarily appeal to vague intuitions and make use of semantic tricks of natural language to convince people that there's something of substance going on.
No, but we realized that we don't need a definitive answer to recognize that there's no secret sauce as vitalism would require us to posit.
We might get around the edges, but until you can show you can directly turn consciousness off (and back on again probably) in vivo, in the same way we try to do with every other "effective" medical treatment, the strength of any proof is going to be inconclusive.
The closest we'll probably get is eventually creating some system which convincingly displays the characteristics that humans recognize as "Consciousness." Hopefully we can stop talking about it then.
But we can, and we do -- every day. That's exactly what general anesthetics do. It's also what happens during sleep.
A good friend of mine from college is an anesthetist, and we've had long conversations about this. He says the consensus is that nobody knows how their drugs work; they just have extensive history with the "dials" they use, so to speak.
A complete chemical theory of consciousness would need to explain not just how general anaesthetics work - a field which is much less comprehensive than most people realise - but also how exactly LSD, psilocybin, DMT, and other hallucinogens produce all the very specific and very different effects they do.