I would argue that this is a totally separate question. In terms of my core intuitions, I'm a hardcore dualist. Chalmers is my favorite philosopher. But the idea of something being "weakly conscious" makes plenty of sense to me. In fact, I have been weakly conscious--waking up from being knocked out for one reason or another. During the process of coming out of it, I remember being in a hazy state, where experiences didn't have the vividness or clarity that they normally do. Of course, it's possible that it's just my memories that are weak, but at any rate the idea that these experiences didn't have the full character of normal conscious experience doesn't strike me as any kind of evidence for materialism.
In my father's last months, I saw him medicated on morphine to the point of being near unconscious, and it's easy to imagine that he was having the same sort of weak experiences as I've had. None of this rattles my dualist intuitions at all.
What most dualists say is that it's incredibly mysterious that anything has any level of consciousness above zero. Saying consciousness doesn't need a special explanation because it's possible to be weakly conscious is like saying gravity doesn't need to be explained because it's possible for objects to have small amounts of gravitation. It's an unrelated issue.
But if your consciousness is separate from your physical brain, how come pumping in certain chemicals affects consciousness on such a fundamental level?
With your examples, you could explain those 'weak experiences' as some kind of signal degradation between the incorporeal consciousness and the physical brain; but if you look at psychoactive drugs, this explanation no longer flies.
I've taken psilocybin a few times, and while there are sensory effects that you could explain as a transmission error, they're a very tiny part of the experience. It really affects the way you think for a few hours. If consciousness is separate from the physical brain, how do you explain that it's affected?
Same goes for brain injury: people have complete changes in personality after suffering brain damage. How do you explain that if the brain is just a transceiver?
Also, why do you trust your intuitions at all? Intuition evolved to deal with everyday situations quickly. It breaks down when you start thinking about things outside human experience.
I would argue that it doesn't.
Consider the following changes to your conscious experience:
- You stub your toe. Your experience changes to include pain.
- Someone says your mother's name. Your experience changes to include thoughts of your mother.
- You take LSD. Your experience changes to include a range of thoughts, sensations, and hallucinations characteristic of a psychedelic experience.
Which of these would you say count as a change to the part of you that we have been referring to with the word 'consciousness'?
I would argue that none of these are changes to what your consciousness is; your conscious experience continues to simply be the feeling of what it's like to have your own thoughts.
As your thoughts change, your experience of having them changes, but your consciousness never becomes anything other than the experience of having your thoughts. On a "fundamental level" nothing about consciousness has changed. Your consciousness is still sitting in a movie theater, watching your brain play out - it's just watching a different movie.
My argument rests on the view that your consciousness is not the haver of thoughts, but is the unexplained witness to the actual haver of thoughts - the brain.
As evidence for this I'd ask you to imagine what would happen if we built up, piece by piece, a perfect computer model of your mind.
It would have the same thoughts, it would express the same ideas, and we could look at the state of the computer at each moment and explain exactly which structures in your mind led to, for example, whatever you choose to say in response to this comment.
At no point would we find some sort of mysterious glitch in the computation, a glitch that we would have to assume was the point where the magic extra-physical force of consciousness entered the picture and exerted its control.
The mind and its thoughts must work without consciousness; as such, consciousness can only be the experience of watching the mind do its work, not an integral part of its functioning.
This has loads of problems (if consciousness has some influence over behavior, then what's the physical mechanism? And if there's none, then why are we able to even talk about it?). But dualists argue that materialism is even worse because it leaves you with outright paradoxes.
So, on this account, a weakly conscious system is just a system that's physically organized in a way that generates only weak experiences.
You can claim that mental properties and physical properties interact without claiming that they are ontologically the same thing, or that one supervenes on (i.e. is just a result of the behaviour of) the other.
There are ways around the problem of interaction for substance dualists too, but they often rely on ad hoc interventions like God guaranteeing that the two worlds always match up (occasionalism).
In Philosophy of Mind, this is discussed in terms of different types of "supervenience": does the mind supervene on the physical world in the same way as something like turbulence or digestion, or in a more limited sense, where the physical world determines the content of experience but there's still more going on? I think some even say it doesn't supervene in any sense at all (but that probably just comes down to a disagreement about terminology).
Physicalists say consciousness supervenes on physical processes in the exact same way as higher-order physical properties like turbulence, digestion, being alive, etc. Dualists feel that still leaves something out.
So yes materialist answers to this question still have a lot of explaining to do, but dualism can't have a free ride either. It certainly doesn't seem to me to be simpler. At least in the physicalist explanation we can point to other epiphenomena and compare them to consciousness in broad ways. But what known, explicable, testable phenomena can we compare dualism to? It stands apart from all of science.
That's just silly. We taught Koko the gorilla how to talk, the effort was supremely insightful in helping us to understand animal cognition. Animals communicate with us all the time, they just do it non-verbally. We often just choose not to listen.
A talking lion would still be a lion. He wouldn't be able to tell us directly what life on the fields is like, but that's what the researchers are for.
I would not continue the 17th century delusion of non-sentient animals any further than we absolutely have to. Animals have exceedingly rich inner lives, and so do humans that don't have language. They just lack certain tooling. I'd put not having language on par with not having sight. A handicap for sure, but you're still a person and you still have thoughts.
There is, literally, a part of the brain that if you shut it off, you lose language. Dennett would have you believe that it's that one part that makes us human.
Through our experience communicating with Koko and with higher primates like chimps, we've come to understand how much 'more conscious' a chimp is than a gorilla, and how we can better assess consciousness without using language. We've seen that the chimp is aware of more of the world than is a gorilla, both inner and outer worlds -- the content of one's own mind and the minds of others.
This really isn't so different from what Dennett espouses, that consciousness is a continuum that begins with a lowly few-cell organism that's aware of nothing in the universe more than a sugar gradient, and ends with another organism (man?) that's attuned to the loftiest truths and actively strives to achieve ideal/complete consciousness, or Satori.
I suspect that on reflection, Dennett would not defend his own "lion-ness" argument for the same reasons you suggest. Koko is an existence proof to the contrary.
I think you're overstating your case.
"although the gorilla learned a large number of signs she never understood grammar or symbolic speech" - https://en.wikipedia.org/wiki/Great_ape_language#Koko
> humans that don't have language
Which humans are those? Language arises pretty spontaneously among humans even if they're deaf or blind - https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
Yet no amount of exposure will make a dog or ape understand language.
> There is, literally, a part of the brain that if you shut it off, you lose language.
You mean the entire left hemisphere?
> > You mean the entire left hemisphere?
Probably not; that's overkill. There are smaller, more specific areas of the brain directly associated with language faculties. Two prominent examples are Broca's Area (associated with producing speech) and Wernicke's Area (associated with comprehending it); neither is the physiological apparatus that actually forms speech sounds.
But it's not. Lions can't talk, almost by definition. A talking lion is necessarily a creature so different from a regular lion that it can't reliably tell you what being a regular lion is like.
It's like how smartphones aren't really phones, they're computers with telephony capabilities. Calling them phones is ridiculous when you really think about it, but we apply that label for historical reasons based on their evolution from early phones. Still, opening up a smartphone won't really tell you anything about how regular phones work.
In other words, completely different brains than regular lions. So, not lions.
A lack of sight from birth results in visual cortex used for other things. Damage to language areas can result in loss of comprehension or production, but neighboring tissue can recover the functionality. The lines we draw are fairly blurry to be honest.
Dennett isn't saying that the language area makes the human but that language and narrative as a cognitive currency are what make us human; the neurobiology is incidental.
When I quiet my mind, look around me passively and simply experience the input of my senses, I can easily suppose that the experience I have in those moments is very similar to the experience of a dog or a chimp in a similar mental state. If I'm not using the higher functions of my brain such as language and only using those functions I share with other mammals, I don't see why I should expect my experience of them to be radically different.
Similarly, many mammals exhibit emotions and desires. We have those too and we evolved them from common ancestors. If a common ancestor of chimps and humans evolved these behaviors and we and chimps exhibit them, why should we necessarily expect the experience of them to be utterly and incomprehensibly different? Especially when many of the resulting behaviors in chimps and humans are so closely analogous and presumably also closely analogous to the behavior of our common ancestor? Surely any claim that we should expect them to be completely different or incomprehensible is the one that is extraordinary and needs to justify itself?
The statements, "Animals are sentient", and, "Animals are not sentient", are opposite positions extrapolated from the same axiomatic premises; that there is some thing called 'sentience', that humans 'have' this 'sentience', and animals either do or do not. Most commonly this kind of thinking also assumes that there is one kind of thing called sentience, (often referred to in religions as a soul). Your phrase, "lack certain tooling", is in line with this logic; it implies that there is some absolute definition, some core kernel of "inner experience" that is separable from ancillary layers that it may leverage. This is analogous to the statement that a computer may be running an operating system, whether it has a keyboard and speakers or not.
What Dennett argues, here and in general, is that this definition of sentience is not nearly holistic enough. That there is no one objective thing called 'sentience'. Instead, there are a bunch of different things; being a lion, being a bat, being a human. Each of these is a model of experience in and of itself, and is defined in and of itself; you can't factor out a common set of experiences or functions and expect them to translate. Part of being a human is being able to, amongst other things, see, hear, taste, and speak. These are not tools built on top of some common 'sentience interface'. To continue (dangerous and leaky) software metaphors, it's much more like a spaghetti-coded monolith; the features -are- the system, and their implementation is part of a feedback loop into the deepest parts of the system, and all the way back out again. The tools aren't 'used by' the 'sentience'; they _are_ the sentience, and the sentience is the tools. A practical demonstration to consider here is something like synesthesia, where what might superficially seem to be well-defined verticals are crossed and interwoven.
And so, when he (extending Wittgenstein) says, "If a lion could talk, we'd understand him just fine. He just wouldn't help us understand anything about lions", he's saying that if such a thing as a talking lion existed, it wouldn't help us understand lions because talking lions and non-talking lions have different experiences, in part because they have different capabilities, different available tools, and thus different architectures. This isn't to say that they don't each have some rich inner life. Just that these inner lives are not mutually comprehensible, shared, or even compatible. In fact, the article, further down, pretty much says just this, albeit in many fewer words:
“If you think there’s a fixed meaning of the word ‘consciousness,’ and we’re searching for that, then you’re already making a mistake,”
This line of thinking is very close to the "17th century delusion" of something like a homunculus. It is made more palatable by using a word like 'soul', however.
Or, indeed, something like a neural network; the weights 'mean' something inside the system, but they're all relatively defined, not absolutely; you can't necessarily look inside the system and expect to understand what individual components 'mean' without addressing their entirely relative context.
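The neural-network point can be made concrete with a toy example (my own sketch, not from the comment above): permuting a network's hidden units changes every individual weight, yet the function the network computes is unchanged, so no single weight "means" anything outside its relative context.

```python
import math

def forward(x, w1, w2):
    """Tiny 2-2-1 network: tanh hidden layer, linear output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wi * hi for wi, hi in zip(w2, hidden))

w1 = [[0.5, -1.2], [0.3, 0.8]]  # weights into the two hidden units
w2 = [1.1, -0.4]                # weights from hidden units to the output

# Swap the two hidden units: reorder w1's rows and w2's entries together.
# Every individual weight now sits in a different place...
w1_perm = [w1[1], w1[0]]
w2_perm = [w2[1], w2[0]]

x = [0.7, -0.2]
print(forward(x, w1, w2))            # ...but the network's behaviour
print(forward(x, w1_perm, w2_perm))  # is exactly the same on every input.
```

The weights only carry meaning as a whole configuration; inspecting one in isolation tells you nothing, which is the "entirely relative context" the comment describes.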
Suggesting that "consciousness" is some kind of smooth continuum is just plain wrong. Empirically, it's more like a series of discrete abilities that kick in as more and more sophisticated neurological processes become available. (We don't really know that for sure, and Chalmers and the animists/dualists may turn out to be right. But that kind of dualism is very fuzzy, and if you're trying to build intelligent systems it's hard to do anything useful with it.)
One tell-tale is the mirror test. Some animals understand they are the creature in the mirror, while others see a different animal - and respond accordingly, driven by instinct.
That's a binary test. You can't "sort of" see a reflection of yourself.
It's true you can get it wrong some of the time - as a human, you can be tired or confused or drunk or ill or simply in a poorly lit environment.
But the ability to make the distinction, assuming you're functioning normally, is either there or it isn't. You either have the self-abstractive processing needed to operate with a self symbol and to understand that you're experiencing it in the mirror, or you don't. And if you don't, you never will.
There are many other possible binary tests, and consciousness seems to exist as a sum total of the pass/fail profile for all of them.
That's why the lion question is irrelevant - it doesn't go far enough to make this point. It assumes - on the basis of no neurological theory - that being able to talk would transform the instinctual and perceptual mechanisms built into lions so completely they'd become irrelevant, and you'd have something that looked like a lion but thought like a philosopher and was no longer even slightly interested in chasing after antelopes.
This is clearly wrong, because being able to talk hasn't made us stop feeling and acting like primates, or - often - even dumber animals.
With those kinds of details in mind, a multidimensional consciousness scale could be defined quite precisely.
But Dennett's fuzzy argument about being sort-of conscious and sort-of not is - I think - hand-wavey vagueness. It lacks the precision needed to do that usefully.
That's an interesting position for him, considering (IIRC) his co-author Douglas Hofstadter (The Mind's I) has a sister who is a perfectly normal & functional adult except for her complete incapacity for language.
"Our folks’ third and last child, Molly, born in Palo Alto, was, sadly, not what anybody had thought. By four or so, Molly was visibly abnormal — not saying any words at all, nor absorbing any. It wasn’t autism; it was a profound brain malfunction, probably dating from birth or prior to birth, but what was wrong, nobody could say — no diagnosis. Molly just didn’t pick up any words, who knows why, and our Mom and Dad had such anguish for so long on Molly’s account, as did Laura and I. What bad luck."
The second sentence of the quote does seem strongly opposed to what I know of Dennett's position.
I can't imagine a "normal functioning" human that cannot communicate in any way with others. Surely there are things she can do to express herself and understand others?
Her handlers were gorilla handlers, not linguists. They were willing to accept any combination of hand signs as communication, and they got to decide what was being "communicated".
The phenomenon was no different from dog owners who believe their dog understands English.
To me his perspective seems a lot like our primitive ancestors looking at the stars and calling them holes poked in the sky--case closed. But you don't have to believe in a soul or anything like that to suspect that there's something significant about self-awareness that we haven't figured out yet. I feel like Dennett is the odd man out here. I suppose we'll have to wait and see if anyone can prove him wrong.
On the contrary, he's just saying that consciousness is not an irreducible phenomenon, as some of his philosophical contemporaries believe. There's still a lot to learn about the reduction. In fact, his is the less lazy way out, because how consciousness reduces to physical laws is completely unknown, but if you take consciousness as irreducible in some way, then you have nothing else to explain: consciousness just is.
Of course, what they really need to do is put their phone away and actually focus on, you know, nothing.
One of my favourite analogies I've seen used is that of a series of boats racing down a river. Normally you're on one, and you might move from boat to boat. Mindfulness meditation is stepping off onto the shore and paying attention to the boats racing past.
"Focusing on nothing" is concentration practice. You need a degree of concentration practice to calm your mind enough for mindfulness meditation, but mindfulness meditation itself is not a concentration practice, but an insight practice.
Zero-shot translation in Google Translate relies on a system-generated language that the computer created itself. If we extrapolate that to a computer that can create something and go down its own, to some extent random, path of creation, can't we create some form of consciousness? I side with Daniel: consciousness is a muscle of the brain.
Maybe qualia arises trivially.
Solipsism strikes me as a more defensible position than consciousness as an emergent phenomenon in a purely materialist universe.
Dualism comes in a variety of forms. Examples include property dualism (Searle, though he denies it) and panpsychism (Chalmers, it seems). Regardless of the version expounded, all such dualisms divorce the mental from the physical, hence the dead horses that are the so-called mind-body problem or the problem of qualia. On this view of matter, we can't account for things like color because, by definition, matter does not have color in the way that we commonly understand it as having color. Physical theories, instead of explaining color, redefine color in other terms. The only place left to locate color, as it is commonly understood, is the mind that's been so brutally split off from matter.

Fun fact: Cartesian dualism is not a "religious residue". It is a philosophical position. In its lingering incarnation, we can credit Descartes, through whose work it bears an interesting relation to the development of modern empirical science. Prior to Cartesianism, Aristotelian views were dominant. Indeed, while the Roman Catholic Church does not have an "official" metaphysical doctrine (metaphysics is philosophical, not theological or doctrinal), the preferred metaphysical theoretical apparatus has been Aristotelian since Aquinas. No such dualism exists in Aristotle or Aquinas.
On the other hand, eliminativism manages to take an even whackier view of things than dualism. Whereas dualism has painted itself into a corner by refusing to reexamine its suppositions, eliminativism "resolves" the problem by shutting its eyes, that is, by denying the existence of those things it must explain. As a result, it is an incoherent position.
What's interesting is that Dennett does distinguish between "function" and "intention", though I'm not entirely sure how he reconciles (if he does at all) all of these with materialism. The reason I draw attention to this point is that teleology/final causality is frequently misunderstood owing to a confusion between conscious intent and function. When an Aristotelian talks about the "purpose" of an organ, he has in mind what the organ is ordered and organized toward, not what intelligent design theorists would describe as "design". (Interestingly, ID theories are also rooted in Cartesian ideas about matter and thus need to appeal to imposed, extrinsic divine intent to locate and explain the function of things like organs. Their scientistic (not scientific) opponents take the eliminativist approach and deny that organs have functions at all because they hold to the same concept of matter as ID theories while rejecting that divine intent exists. On the other hand, Aristotelianism maintains the intrinsic finality of things like organs and thus does not need to appeal to some externally imposed divine intent to explain function as such.)
As an aside, Chu spaces also provide a semantics for linear logic and are useful in understanding concurrency.
Not even remotely. I'm not even sure how you could possibly form such a conclusion, except by some deep misunderstanding of all of the various materialist arguments that discuss qualia. Dennett is himself a materialist by the way.
Materialism can classify qualia as an illusion while still recognizing that some reduction of qualia to physical laws is needed. This reduction would be left to the domain of science though, because that's where it would belong.
That's right, it's an illusion, in the sense that we perceive ourselves to have some irreducible subjectivity, but we actually don't.
> But when pressed enough, then he retreats to a just barely reasonable stance that's closer to Chalmers than the stances he spends most of his time communicating.
I'd like to see a citation for that, because as far as I've seen, Dennett hasn't changed his stance on this subject in over 20 years, and he's been very straightforward about his position in every interview I've watched or read.
And what he thinks is quite clear: he's a straight up materialist. Qualia don't actually exist as some irreducible phenomenon and consciousness is an illusion. He's published multiple books on the subject arguing for this position quite forcefully.
But can it? To say something is an illusion is to say that it is really an experience of something else. And qualia are qualitative experiences, so the claim is that certain experiences are not experiences… which is problematic. Of course, a proper argument needs to be fleshed out more than that, but eliminativists face a real difficulty, unless they broaden the definition of materialism (which would take us closer to Aristotle).
"Selves" and zombies also crop up in these conversations, but they are neither here nor there. We're talking about the existence of things like "redness". Talk of "selves" is no doubt related to the Cartesian identification of mind and self, but something that is entirely irrelevant to the question at hand.
Funny how you keep calling materialism "absurd" and "incoherent", yet provide no coherent argument of your own to support this position. If anyone unfamiliar with this subject is reading this thread, rest assured that the anti-materialist sentiments espoused here are a minority view. A recent survey of academic philosophers found that the majority support a materialist philosophy of mind, so frankly, these charges of incoherency and absurdity don't pass a lay person's basic sniff test.
As for the existence of "redness" specifically, I can easily point out how the various thought experiments that allegedly support the existence of redness are fallacious. So instead of making further bold claims, would you care to present such an argument for scrutiny?
> Of course, if it is an illusion, then it still exists as an illusion, hence the incoherence of eliminativist materialism.
A car is also an illusion under materialism. But clearly I drove something to work this morning. So does this apparent incongruity entail some incoherency in materialism? Or is the problem really that you're attacking a straw man?
Your definition of "illusion" begs the question by simply assuming a subject is needed, i.e. that an "I" must experience an illusion. Rather, an illusion simply describes the relation between perception and truth. If a perception, taken at face value, entails a false conclusion, then it's an illusion.
Even the basic dictionary definitions of illusion make no reference to a subject. They're all of the form of "a false idea or belief", or "a deceptive appearance or impression". Beliefs and appearances are attributes we can ascribe to mechanistic systems too, like computers, which can have sensors plugged into Bayesian inference engines that can infer false "beliefs".
So requiring a subject is a property that you have imposed on the meaning of illusion, it's not intrinsic to it.
> Any attempt to eliminate qualia or self or intentionality, etc. as being fundamental realities runs into the problem that all these things are more fundamental to my understanding than any proposed alternative.
1. Perception is fundamental to understanding. Whether experience is fundamental is very questionable.
2. Being fundamental doesn't entail something is irreducible. Being in a car is fundamental to driving on a road, that doesn't entail cars are irreducible.