Daniel Dennett’s Science of the Soul (newyorker.com)
127 points by sergeant3 on Mar 23, 2017 | 61 comments



There's a lot of talk in here about whether consciousness is binary, whether it can turn on and off like a lightswitch, as if that's a good proxy for dualism vs. materialism.

I would argue that this is a totally separate question. In terms of my core intuitions, I'm a hardcore dualist. Chalmers is my favorite philosopher. But the idea of something being "weakly conscious" makes plenty of sense to me. In fact, I have been weakly conscious--waking up from being knocked out for one reason or another. During the process of coming out of it, I remember being in a hazy state, where experiences didn't have the vividness or clarity that they normally do. Of course, it's possible that it's just my memories that are weak, but at any rate the idea that these experiences didn't have the full character of normal conscious experience doesn't strike me as any kind of evidence for materialism.

In my father's last months, I saw him medicated on morphine to the point of being near unconscious, and it's easy to imagine that he was having the same sort of weak experiences as I've had. None of this rattles my dualist intuitions at all.

What most dualists say is that it's incredibly mysterious that anything has any level of consciousness above zero. Saying consciousness doesn't need a special explanation because it's possible to be weakly conscious is like saying gravity doesn't need to be explained because it's possible for objects to have small amounts of gravitation. It's an unrelated issue.


> In my father's last months, I saw him medicated on morphine to the point of being near unconscious, and it's easy to imagine that he was having the same sort of weak experiences as I've had. None of this rattles my dualist intuitions at all.

But if your consciousness is separate from your physical brain, how come pumping in certain chemicals affects consciousness on such a fundamental level?

With your examples, you could explain those 'weak experiences' as some kind of signal degradation between the incorporeal consciousness and the physical brain; but if you look at psychoactive drugs, this explanation no longer flies.

I've taken psilocybin a few times, and while there are sensory effects that you could explain as a transmission error, that's a very tiny part of the experience. It really affects the way you think for a few hours. If consciousness is separate from the physical brain, how can you explain why it's affected?

Same goes for brain injury, people have complete changes in personality after suffering brain damage. How do you explain that if the brain is just a transceiver ?

Also, why do you trust your intuitions at all? Intuition evolved to deal with everyday situations quickly. It breaks down when you start thinking about things outside human experience.


> But if your consciousness is separate from your physical brain, how come pumping in certain chemicals affects consciousness on such a fundamental level?

I would argue that it doesn't.

Consider the following changes to your conscious experience:

- You stub your toe. Your experience changes to include pain.

- Someone says your mother's name. Your experience changes to include thoughts of your mother.

- You take LSD. Your experience changes to include a range of thoughts, sensations, and hallucinations characteristic of a psychedelic experience.

Which of these would you say count as a change to the part of you that we have been referring to with the word 'consciousness'?

I would argue that none of these are changes to what your consciousness is; your conscious experience continues to simply be the feeling of what it's like to have your own thoughts.

As your thoughts change, your experience of having them changes, but your consciousness never ceases to be anything other than the experience of having your thoughts. On a "fundamental level" nothing about consciousness has changed. Your consciousness is still sitting in a movie theater, watching your brain play out - it's just watching a different movie.

My argument rests on the view that your consciousness is not the haver of thoughts, but is the unexplained witness to the actual haver of thoughts - the brain.

As evidence for this I'd ask you to imagine what would happen if we built up, piece by piece, a perfect computer model of your mind.

It would have the same thoughts, it would express the same ideas, and we could look at the state of the computer at each moment and explain exactly which structures in your mind led to, for example, whatever you choose to say in response to this comment.

At no point would we find some sort of mysterious glitch in the computation, a glitch that we would have to assume was the point where the magic extra-physical force of consciousness entered the picture and exerted its control.

The mind and its thoughts must work without consciousness; as such, consciousness can only be the experience of watching the mind do its work, not an integral part of its functioning.


Modern, sophisticated dualists tend to think that consciousness just sits "on top" of brain activity: experiences are generated based on what's going on in the brain. Many are "epiphenomenalists," who say that the content of experience is determined only by brain activity, so the information flow is one-way: brain -> conscious experience.

This has loads of problems (if consciousness has some influence over behavior, then what's the physical mechanism? And if there's none, then why are we able to even talk about it?). But dualists argue that materialism is even worse because it leaves you with outright paradoxes.

So, on this account, a weakly conscious system is just a system that's physically organized in a way that generates only weak experiences.


This is the (one of the) difference(s) between property and substance dualism.

You can claim that mental properties and physical properties interact without claiming that they are ontologically the same thing, or that one supervenes on (i.e. is just a result of the behaviour of) the other.

There are ways around the problem of interaction for substance dualists too, but they often rely on ad hoc interventions like God guaranteeing that the two worlds always match up (occasionalism).


But is consciousness really an absolute quality like gravitation, or a qualitative one like, say, turbulence? Are you a turbulence dualist? If consciousness arises from certain physical arrangements and processes of matter at a macroscopic scale, like turbulence, then we can absolutely talk about it arising from non-conscious matter, just as we can talk about turbulence arising from static or ordered matter.


Dualists tend to have the intuition that it's an absolute quality--or at least, something with a special status beyond a higher-order description of patterns in physical activity. Your turbulence example is perfect: I can't imagine a fishtank that is atom-for-atom identical to a system that's experiencing turbulence, but somehow isn't "really" turbulent. I can easily imagine a system that is in the exact same state as my brain, but isn't having any internal experiences.

In Philosophy of Mind, this is discussed in terms of different types of "supervenience": whether the mind supervenes on the physical world in the same way as something like turbulence or digestion, or in a more limited sense where the physical world determines the content of experience but there's still more going on. I think some say it doesn't supervene in any sense at all (but that probably just comes down to a disagreement about terminology).

Physicalists say consciousness supervenes on physical processes in the exact same way as higher-order physical properties like turbulence, digestion, being alive, etc. Dualists feel that still leaves something out.


But if it's an absolute quality, it shouldn't depend on the arrangement or state of brain matter at all. The mass of a group of atoms is an absolute quality that doesn't change depending on how you move them about. Yet our experience of consciousness depends totally, critically, and in highly testable ways on the arrangement of atoms in our neurons. There doesn't seem to be much absolute about it at all. Even tiny changes can eliminate it completely. I just find it very hard to credit the absolute-quality proposition, yet I don't see how dualism can avoid it.

So yes, materialist answers to this question still have a lot of explaining to do, but dualism can't get a free ride either. It certainly doesn't seem to me to be simpler. At least in the physicalist explanation we can point to other epiphenomena and compare them to consciousness in broad ways. But what known, explicable, testable phenomena can we compare dualism to? It stands apart from all of science.


If you're interested in the philosophical implications of being "semi-conscious," check out section 3 of http://consc.net/papers/qualia.html


> “Right,” Dennett replied. “He would be so different from regular lions that he wouldn’t tell us what it’s like to be a lion. I think we should just get used to the fact that the human concepts we apply so comfortably in our everyday lives apply only sort of to animals.”

That's just silly. We taught Koko the gorilla how to talk, and the effort was supremely insightful in helping us understand animal cognition. Animals communicate with us all the time; they just do it non-verbally. We often just choose not to listen.

A talking lion would still be a lion. He wouldn't be able to tell us directly what life on the fields is like, but that's what the researchers are for.

I would not continue the 17th century delusion of non-sentient animals any further than we absolutely have to. Animals have exceedingly rich inner lives, and so do humans that don't have language. They just lack certain tooling. I'd put not having language on par with not having sight. A handicap for sure, but you're still a person and you still have thoughts.

There is, literally, a part of the brain that if you shut it off, you lose language. Dennett would have you believe that it's that one part that makes us human.


I think Dennett responded too narrowly to the question and thus misrepresented his own point of view. "A lion that talks like a human" is not a well-formed thought experiment, since no real lion could talk like a human. A lion that does wouldn't be like any lion we've ever seen. And if Koko could talk like a human, she too would no longer be like any gorilla we've seen. The main point, the limited consciousness of lions, is lost in an implausible, fantastic model of a superintelligent lion that doesn't exist and never could. There's no point in extending that (lousy) line of reasoning to gorillas too.

Through our experience communicating with Koko and with higher primates like chimps, we've come to understand how much 'more conscious' a chimp is than a gorilla, and how we can better assess consciousness without using language. We've seen that the chimp is aware of more of the world than is a gorilla, both inner and outer worlds -- the content of one's own mind and the minds of others.

This really isn't so different from what Dennett espouses, that consciousness is a continuum that begins with a lowly few-cell organism that's aware of nothing in the universe more than a sugar gradient, and ends with another organism (man?) that's attuned to the loftiest truths and actively strives to achieve ideal/complete consciousness, or Satori.

I suspect that on reflection, Dennett would not defend his own "lion-ness" argument for the same reasons you suggest. Koko is an existence proof to the contrary.


> We taught Koko the gorilla how to talk

I think you're overstating your case. "although the gorilla learned a large number of signs she never understood grammar or symbolic speech" - https://en.wikipedia.org/wiki/Great_ape_language#Koko

> humans that don't have language

Which humans are those? Language arises pretty spontaneously among humans even if they're deaf or blind - https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language

Yet no amount of exposure will make a dog or ape understand language.

> There is, literally, a part of the brain that if you shut it off, you lose language.

You mean the entire left hemisphere?


> There is, literally, a part of the brain that if you shut it off, you lose language.

> > You mean the entire left hemisphere?

probably not that, that's overkill. there are smaller, more specific areas in the brain directly associated with language faculties. two prominent examples are Broca's Area and Wernicke's Area which are both involved in speech comprehension (but not in control of the physiological apparatus for forming speech sounds).


Um, Wikipedia says Broca's area is involved in speech production. If you just mean the specific details of working the lips, vocal cords, etc., wouldn't that be the cerebellum's job?

https://en.wikipedia.org/wiki/Broca%27s_area


yes, it is involved in the neurological processing of speech production w.r.t. grammar and syntax. that's a separate function from fine control of the muscles used to produce speech sounds.


> A talking lion would still be a lion.

But it's not. Lions can't talk, almost by definition. A talking lion is necessarily a creature so different from a regular lion that it can't reliably tell you what being a regular lion is like.

It's like how smartphones aren't really phones, they're computers with telephony capabilities. Calling them phones is ridiculous when you really think about it, but we apply that label for historical reasons based on their evolution from early phones. Still, opening up a smartphone won't really tell you anything about how regular phones work.


Talking lions are just lions with trans-species communicating capabilities. Lions communicate with each other instinctively. They at the very least understand each other; whether we understand lions is a different point.


> Talking lions are just lions with trans-species communicating capabilities.

In other words, completely different brains than regular lions. So, not lions.


I agree with your position (and with Dennett's). "Talking" is not like an independent module with a lead you can attach to a lion's brain to get it to talk. It must be integrated with the lion's brain, and that necessarily changes the brain to the point that it's no longer a "lion." This is because even simple statements like "I am hungry" require the ability for sophisticated abstraction. "Talking" is more than mere vocalization; it's also the thought processes behind the vocalization. Once you've changed a brain so that it is capable of those sophisticated abstractions, it's no longer the same.


Is a person missing half a brain, in a vegetative state with little hope for recovery, not human?


Except they're missing half a human brain. A talking lion would not have a lion brain.


In the context of, say, figuring out what makes humans act the way they do: most certainly not.


Your quote doesn't support your conclusions about the subject or Dennett's views on them.

A lack of sight from birth results in visual cortex used for other things. Damage to language areas can result in loss of comprehension or production, but neighboring tissue can recover the functionality. The lines we draw are fairly blurry to be honest.

Dennett isn't saying that the language area makes the human but that language and narrative as a cognitive currency are what make us human; the neurobiology is incidental.


I don't think Dennett is denying that animals have rich inner lives, or that they can express basic desires to us. He's saying that their conscious experience is somehow fundamentally incomprehensible to us.


I'm not so sure. Maybe for an octopus, but as mammals we have fundamentally similar brains and neurology to many other mammals.

When I quiet my mind, look around me passively and simply experience the input of my senses, I can easily suppose that the experience I have in those moments is very similar to the experience of a dog or a chimp in a similar mental state. If I'm not using the higher functions of my brain such as language and only using those functions I share with other mammals, I don't see why I should expect my experience of them to be radically different.

Similarly, many mammals exhibit emotions and desires. We have those too and we evolved them from common ancestors. If a common ancestor of chimps and humans evolved these behaviors and we and chimps exhibit them, why should we necessarily expect the experience of them to be utterly and incomprehensibly different? Especially when many of the resulting behaviors in chimps and humans are so closely analogous and presumably also closely analogous to the behavior of our common ancestor? Surely any claim that we should expect them to be completely different or incomprehensible is the one that is extraordinary and needs to justify itself?


That quote, and its context, is not an argument that animals do not have exceedingly rich inner lives, nor is it an argument that language is some prerequisite for such a state. It is an effort to address, and dismiss, the kind of "17th century delusions" that you're worried about (and, in fact, participating in, whether intentionally or not).

The statements, "Animals are sentient", and, "Animals are not sentient", are opposite positions extrapolated from the same axiomatic premises: that there is some thing called 'sentience', that humans 'have' this 'sentience', and animals either do or do not. Most commonly this kind of thinking also assumes that there is one kind of thing called sentience (often referred to in religions as a soul). Your phrase, "lack certain tooling", is in line with this logic; it implies that there is some absolute definition, some core kernel of "inner experience" that is separable from ancillary layers that it may leverage. This is analogous to the statement that a computer may be running an operating system, whether it has a keyboard and speakers or not.

What Dennett argues, here and in general, is that this definition of sentience is not nearly holistic enough. That there is no one objective thing called 'sentience'. Instead, there are a bunch of different things; being a lion, being a bat, being a human. Each of these is a model of experience in and of itself, and is defined in and of itself; you can't factor out a common set of experiences or functions and expect them to translate. Part of being a human is being able to, amongst other things, see, hear, taste, and speak. These are not tools built on top of some common 'sentience interface'[1]. To continue (dangerous and leaky) software metaphors, it's much more like a spaghetti-coded monolith[2]; the features -are- the system, and their implementation is part of a feedback loop into the deepest parts of the system, and all the way back out again. The tools aren't 'used by' the 'sentience'; they _are_ the sentience, and the sentience is the tools. A practical demonstration to consider here is something like synesthesia, where what might superficially seem to be well-defined verticals are crossed and interwoven.

And so, when he (extending Wittgenstein) says, "If a lion could talk, we’d understand him just fine. He just wouldn’t help us understand anything about lions.”, he's saying that if such a thing as a talking lion existed, it wouldn't help us understand lions because talking lions and non-talking lions have different experiences, in part because they have different capabilities, different available tools, and thus different architectures. This isn't to say that they don't each have some rich inner life. Just that these inner lives are not mutually comprehensible, shared, or even compatible. In fact, the article, further down, pretty much says just this, albeit in many fewer words:

“If you think there’s a fixed meaning of the word ‘consciousness,’ and we’re searching for that, then you’re already making a mistake,”

---

[1] This line of thinking is very close to the "17th century delusion" of something like a homunculus. It is made more palatable, however, by using a word like 'soul'.

[2] or, indeed, something like a neural network; the weights 'mean' something inside the system, but they're all relatively defined, not absolutely; you can't necessarily look inside the system and expect to understand what individual components 'mean' without addressing their entirely relative context.


That last sentence seems to be true as far as it goes, but I agree with the criticisms that it doesn't add much to the debate.

Suggesting that "consciousness" is some kind of smooth continuum is just plain wrong. Empirically, it's more like a series of discrete abilities that kick in as more and more sophisticated neurological processes become available. (We don't really know that for sure, and Chalmers and the animists/dualists may turn out to be right. But that kind of dualism is very fuzzy, and if you're trying to build intelligent systems it's hard to do anything useful with it.)

One tell-tale is the mirror test. Some animals understand they are the creature in the mirror, while others see a different animal - and respond accordingly, driven by instinct.

That's a binary test. You can't "sort of" see a reflection of yourself.

It's true you can get it wrong some of the time - as a human, you can be tired or confused or drunk or ill or simply in a poorly lit environment.

But the ability to make the distinction, assuming you're functioning normally, is either there or it isn't. You either have the self-abstractive processing needed to operate with a self symbol and to understand that you're experiencing it in the mirror, or you don't. And if you don't, you never will.

There are many other possible binary tests, and consciousness seems to exist as a sum total of the pass/fail profile for all of them.

That's why the lion question is irrelevant - it doesn't go far enough to make this point. It assumes - on the basis of no neurological theory - that being able to talk would transform the instinctual and perceptual mechanisms built into lions so completely they'd become irrelevant, and you'd have something that looked like a lion but thought like a philosopher and was no longer even slightly interested in chasing after antelopes.

This is clearly wrong, because being able to talk hasn't made us stop feeling and acting like primates, or - often - even dumber animals.

With those kinds of details in mind, a multidimensional consciousness scale could be defined quite precisely.

But Dennett's fuzzy argument about being sort-of conscious and sort-of not is - I think - hand-wavey vagueness. It lacks the precision needed to do that usefully.


I totally agree (not any kind of expert). Even babies don't talk for a couple of years, but they are very much alive, and in fact some psychology says that most of personality is baked in by about 7 years old. Most of it, then, must be determined without words or language, rather by memories and connected emotional experiences. It's not hard to imagine that animals might experience the world similarly to babies, via instincts and emotions.


Consistency is really key. As long as there is consistency in thought and action, there is a level of understanding, whether innate or conscious. Language has nothing to do with consistency except for the outward showing of it to others. And it may possibly help with self-consistency... we cannot assert that yet.


> There is, literally, a part of the brain that if you shut it off, you lose language. Dennett would have you believe that it's that one part that makes us human.

That's an interesting position for him, considering (IIRC) his co-author Douglas Hofstadter (The Mind's I) has a sister who is a perfectly normal & functional adult except for her complete incapacity for language.


To be fair, that is a ludicrous misrepresentation of Dennett's position.


Interesting:

"Our folks’ third and last child, Molly, born in Palo Alto, was, sadly, not what anybody had thought. By four or so, Molly was visibly abnormal — not saying any words at all, nor absorbing any. It wasn’t autism; it was a profound brain malfunction, probably dating from birth or prior to birth, but what was wrong, nobody could say — no diagnosis. Molly just didn’t pick up any words, who knows why, and our Mom and Dad had such anguish for so long on Molly’s account, as did Laura and I. What bad luck."

https://prelectur.stanford.edu/lecturers/hofstadter/autolipo...

The second sentence of the quote does seem strongly opposed to what I know of Dennett's position.


What does "complete incapacity for language" mean? There is speech, hearing, reading, writing, sign language…

I can't imagine a "normally functioning" human who cannot communicate in any way with others. Surely there are things she can do to express herself and understand others?


And in the act of teaching Koko to talk - we changed Koko.


Koko did not talk. She imitated hand signs.

Her handlers were gorilla handlers, not linguists. They were willing to accept any combination of hand signs as communication, and they got to decide what was being "communicated".

The phenomenon was no different from dog owners who believe their dog understands English.


Yes! Highly recommend Carl Safina's Beyond Words.


I think my problem with Dennett boils down to the fact that he's a wet blanket =) His attitude would be fine if it really did seem like there was nothing more to be learned about consciousness.

To me his perspective seems a lot like our primitive ancestors looking at the stars and calling them holes poked in the sky--case closed. But you don't have to believe in a soul or anything like that to suspect that there's something significant about self-awareness that we haven't figured out yet. I feel like Dennett is the odd man out here. I suppose we'll have to wait and see if anyone can prove him wrong.


> His attitude would be fine if it really did seem like there was nothing more to be learned about consciousness.

On the contrary, he's just saying that consciousness is not an irreducible phenomenon, as some of his philosophical contemporaries believe. There's still a lot to learn about the reduction. In fact, his is the less lazy way out, because how consciousness reduces to physical laws is completely unknown; if you take consciousness as irreducible in some way, then you have nothing left to explain: consciousness just is.


Ah-- interesting. Then it's certainly possible I have him all wrong. I'll take another look. Thanks!


In the age of casual pseudoscience influencing the general public (homeopathy, the surge of meditation apps and methods, the supplement industry), sure, Dennett is the odd man out. But from a hard-science standpoint his is a hard philosophy to refute. No amount of "feeling" like there's some bigger or deeper meaning to it all changes the fact that consciousness is just a bunch of neurons firing in a biological computer. Complex, yes, but finite and physical nonetheless.


I think it's unfair of you to lump meditation with homeopathy. The latter is obviously pseudoscience but there are many studies backing the former's claims to reduce anxiety, depression, etc.


This is somewhat true, but the pseudoscience I was referring to in that case is that a lot of these apps and companies are marketing to a new wave of hipster types who think all-natural foods and a hip new app can cure their depression, when really an hour with a shrink and some reliable meds might be better for them... but no, those things are "unnatural" and "dangerous".


I've tried a couple of the more prominent ones (Headspace, 10% Happier) and I really did not find anything aimed particularly at the new age/hipster crowd. Maybe it's true of some of the rest.


Meditation apps. People are clearly getting rich selling "mindfulness" to people who think they need it.

Of course, what they really need to do is put their phone away and actually focus on, you know, nothing.


Mindfulness is not "focusing on nothing". Mindfulness is, if anything, "defocusing": letting the mind flow and paying attention to what arises without getting caught up in it.

One of my favourite analogies I've seen used is that of a series of boats racing down a river. Normally you're on one, and you might move from boat to boat. Mindfulness meditation is stepping off onto the shore and paying attention to the boats racing past.

"Focusing on nothing" is concentration practice. You need a degree of concentration practice to calm your mind enough for mindfulness meditation, but mindfulness meditation itself is not a concentration practice; it's an insight practice.


I think a great many people could benefit from mindfulness. Especially those who think they don't need it.


"If it’s easy for you to imagine a conscious robot, then you probably side with Dennett."

Zero-shot translation in Google Translate is system-generated language that a computer created itself. If we were to extrapolate that to a computer that can create something on its own and go about its own, somewhat random, path of creation, can't we create some form of consciousness...? I side with Daniel: consciousness is a muscle of the brain.


The sword of not knowing cuts two ways. Take the so-called hard problem of consciousness[1]: where do "qualia" come from? We don't actually know if it is hard at all.

Maybe qualia arises trivially.

[1] https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


My take on zombies is that I don't think it's possible to construct a system that exhibits all the characteristics and behaviours of a mind without constructing a mind.


I go in circles on this. Yes, that argument seems watertight, but I cannot make the 'hard problem' go away. The one thing Descartes got right is that there is a single existence proof for qualia. All the difficulties we have in defining them, or in logically or experimentally finding evidence for their existence, are irrelevant.

Solipsism strikes me as a more defensible position than consciousness as an emergent phenomenon in a purely materialist universe.


First problem: philosophers are not divided into two camps, so Rothman is misinformed. Aristotelianism, for example, is another. But let's return to these "two camps" for a moment. Typically, the two purported camps are some variation of dualism and eliminativism. Both ultimately rest on the Cartesian metaphysical legacy.

Dualism comes in a variety of forms. Examples include property dualism (Searle, though he denies it) and panpsychism (Chalmers, it seems). Regardless of the version expounded, all such dualisms divorce the mental from the physical, hence the dead horses that are the so-called mind-body problem or the problem of qualia. On this view of matter, we can't account for things like color because, by definition, matter does not have color in the way that we commonly understand it as having color. Physical theories, instead of explaining color, redefine color in other terms. The only place left to locate color, as it is commonly understood, is the mind that's been so brutally split off from matter. Fun fact: Cartesian dualism is not a "religious residue". It is a philosophical position. In its lingering incarnation, we can credit Descartes through whose work it bears an interesting relation to the development of modern empirical science. Prior to Cartesianism, Aristotelian views were dominant. Indeed, while the Roman Catholic Church does not have an "official" metaphysical doctrine (metaphysics is philosophical, not theological or doctrinal), the preferred metaphysical theoretical apparatus has been Aristotelian since Aquinas. No such dualism exists in Aristotle or Aquinas.

On the other hand, eliminativism manages to take an even wackier view of things than dualism. Whereas dualism has painted itself into a corner by refusing to reexamine its suppositions, eliminativism "resolves" the problem by shutting its eyes, that is, by denying the existence of those things it must explain. As a result, it is an incoherent position.

What's interesting is that Dennett does distinguish between "function" and "intention", though I'm not entirely sure how he reconciles (if he does at all) all of these with materialism. The reason I draw attention to this point is that teleology/final causality is frequently misunderstood owing to a confusion between conscious intent and function. When an Aristotelian talks about the "purpose" of an organ, he has in mind what the organ is ordered and organized toward, not what intelligent design theorists would describe as "design". (Interestingly, ID theories are also rooted in Cartesian ideas about matter and thus need to appeal to imposed, extrinsic divine intent to locate and explain the function of things like organs. Their scientistic (not scientific) opponents take the eliminativist approach and deny that organs have functions at all because they hold to the same concept of matter as ID theories while rejecting that divine intent exists. On the other hand, Aristotelianism maintains the intrinsic finality of things like organs and thus does not need to appeal to some externally imposed divine intent to explain function as such.)


One of the best papers I've read on Cartesian duality was by Vaughan Pratt[1] on Chu spaces. It's a bit of a slog for those not conversant in foundations, but it does help ground the conversation in more rigorous terms.

As an aside, Chu spaces also provide a semantics for linear logic and are useful in understanding concurrency.

[1]: http://boole.stanford.edu/pub/ratmech.pdf
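For readers unfamiliar with the formalism: a Chu space over a set K is just a set of points, a set of states, and a K-valued matrix relating them, with duality given by transposing that matrix. A minimal sketch in Python (this is an illustration of the standard definition, not code from the paper; all names and the example data are mine):

```python
# Minimal sketch of a Chu space over K = {0, 1}: a carrier set of
# points, a cocarrier set of states, and a matrix r(a, x) in K saying
# how each point relates to each state.
class Chu:
    def __init__(self, points, states, r):
        self.points = points          # carrier set A
        self.states = states          # cocarrier set X
        self.r = r                    # dict: (point, state) -> value in K

    def dual(self):
        """The dual space swaps points and states, transposing the matrix."""
        return Chu(self.states, self.points,
                   {(x, a): v for (a, x), v in self.r.items()})

# Example: two points whose states are the "open sets" {}, {a}, {a, b},
# with r as the membership relation.
points = ["a", "b"]
states = ["empty", "just_a", "both"]
member = {("a", "empty"): 0, ("a", "just_a"): 1, ("a", "both"): 1,
          ("b", "empty"): 0, ("b", "just_a"): 0, ("b", "both"): 1}
space = Chu(points, states, member)
d = space.dual()
assert d.r[("just_a", "a")] == 1      # the dual is the transpose
assert space.dual().dual().r == space.r  # duality is an involution
```

The point-state duality here is what Pratt uses to model the mind-body relation: the "mental" and "physical" sides are two views of one matrix rather than two substances.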


> On the other hand, eliminativism manages to take an even wackier view of things than dualism. Whereas dualism has painted itself into a corner by refusing to reexamine its suppositions, eliminativism "resolves" the problem by shutting its eyes, that is, by denying the existence of those things it must explain. As a result, it is an incoherent position.

Not even remotely. I'm not even sure how you could possibly form such a conclusion, except by some deep misunderstanding of all of the various materialist arguments that discuss qualia. Dennett is himself a materialist by the way.

Materialism can classify qualia as an illusion while still recognizing that some reduction of qualia to physical laws is needed. This reduction would be left to the domain of science though, because that's where it would belong.


I've read a few Dennett books and watched many of his talks, and imo the main problem with him is his and his fans' motte & bailey style of arguing. He spends inordinate amounts of time heavily suggesting that consciousness is an ephemeral nothing that we're not just ignorant about, but mistaken about. But when pressed enough, then he retreats to a just barely reasonable stance that's closer to Chalmers than the stances he spends most of his time communicating. So I can't quite disagree with him, because after 20 years I still don't exactly know what he thinks.


> He spends inordinate amounts of time heavily suggesting that consciousness is an ephemeral nothing that we're not just ignorant about, but mistaken about.

That's right, it's an illusion, in the sense that we perceive ourselves to have some irreducible subjectivity, but we actually don't.

> But when pressed enough, then he retreats to a just barely reasonable stance that's closer to Chalmers than the stances he spends most of his time communicating.

I'd like to see a citation for that, because as far as I've seen, Dennett hasn't changed his stance on this subject in over 20 years, and he's been very straightforward about his position in every interview I've watched or read.

And what he thinks is quite clear: he's a straight up materialist. Qualia don't actually exist as some irreducible phenomenon and consciousness is an illusion. He's published multiple books on the subject arguing for this position quite forcefully.


>Materialism can classify qualia as an illusion

But can it? To say something is an illusion is to say that it is really an experience of something else. And qualia are qualitative experiences, so the claim is that certain experiences are not experiences, which is problematic. Of course, a proper argument needs to be fleshed out more than that, but eliminativists face a real difficulty, unless they broaden the definition of materialism (which would take us closer to Aristotle).


Adding to your post, let's consider an example, i.e., "redness" as it is commonly understood. Dualists will relegate "redness" to an experience because, to them, matter is devoid of things like "redness" by virtue of the dualist's presupposed concept of matter. The dualist still has to deal with a number of problems stemming from his position, e.g., the interaction problem, but he can at least ostensibly locate "redness" in reality, viz., the Cartesian mind. Even if it is an illusion, it exists as an illusion in the mind. Materialists, who typically dispense with the Cartesian immaterial mind but stick with a broadly Cartesian concept of matter, either live in the vain hope that they can eventually locate "redness" somewhere in matter, or come to the absurd eliminativist conclusion that "redness" simply doesn't exist or that it is an illusion. Of course, if it is an illusion, then it still exists as an illusion, hence the incoherence of eliminativist materialism. Aristotelianism accepts a richer view of matter in which "redness" does exist, so there is no need to posit this bizarre and unbridgeable division between physical things and immaterial mental qualia.

"Selves" and zombies also crop up in these conversations, but they are neither here nor there. We're talking about the existence of things like "redness". Talk of "selves" is no doubt related to the Cartesian identification of mind and self, but it is entirely irrelevant to the question at hand.


> Materialists, who typically dispense with the Cartesian immaterial mind but stick with a broadly Cartesian concept of matter, either live in the vain hope that they can eventually locate "redness" somewhere in matter, or come to the absurd eliminativist conclusion that "redness" simply doesn't exist or that it is an illusion.

Funny how you keep calling materialism "absurd" and "incoherent", yet provide no coherent argument of your own to support this position. If anyone unfamiliar with this subject is reading this thread, rest assured that the anti-materialist sentiments espoused here are a minority view. A recent survey of academic philosophers found that the majority support a materialist philosophy of mind, so frankly, these charges of incoherency and absurdity don't pass a lay person's basic sniff test.

As for the existence of "redness" specifically, I can easily point out how the various thought experiments that allegedly support the existence of redness are fallacious. So instead of making further bold claims, would you care to present such an argument for scrutiny?

> Of course, if it is an illusion, then it still exists as an illusion, hence the incoherence of eliminativist materialism.

A car is also an illusion under materialism. But clearly I drove something to work this morning. So does this apparent incongruity entail some incoherency in materialism? Or is the problem really that you're attacking a straw man?


> But can it? To say something is an illusion is to say that is really an experience of something else.

Your definition of "illusion" begs the question by simply assuming a subject is needed, i.e., that an "I" must experience an illusion. Rather, an illusion simply describes the relation between perception and truth. If a perception, taken at face value, entails a false conclusion, then it's an illusion.

Even the basic dictionary definitions of illusion make no reference to a subject. They're all of the form of "a false idea or belief", or "a deceptive appearance or impression". Beliefs and appearances are attributes we can ascribe to mechanistic systems too, like computers, which can have sensors plugged into Bayesian inference engines that can infer false "beliefs".

So requiring a subject is a property that you have imposed on the meaning of illusion, it's not intrinsic to it.
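To make the "sensors plugged into Bayesian inference engines" point concrete, here's a toy sketch (all numbers and the scenario are made up for illustration): a purely mechanistic Bayesian updater fed deceptive sensor readings ends up with a confident false "belief", with no experiencing subject anywhere in the system.

```python
# Toy illustration: a mechanistic Bayesian updater can arrive at a
# false "belief" from deceptive sensor readings. No subject required.
def update(prior, likelihood_true, likelihood_false):
    """Posterior P(H | reading) via Bayes' rule for a binary hypothesis."""
    num = likelihood_true * prior
    return num / (num + likelihood_false * (1 - prior))

# Hypothesis H: "the light is red". By stipulation the light is actually
# green, but the sensor is miscalibrated and reports "red" 90% of the
# time even when the light is green.
belief = 0.5                      # uninformative prior
for _ in range(5):                # five deceptive "red" readings
    belief = update(belief, likelihood_true=0.95, likelihood_false=0.9)

# The system now leans toward "red" despite the light being green:
# its perception, taken at face value, entails a false conclusion.
assert belief > 0.5
```

Each reading multiplies the odds by 0.95/0.9, so the false belief strengthens with every deceptive observation, which is exactly the "illusion as perception diverging from truth" reading of the word.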


I made no reference to a subject either. For "my" experiences to be not really "anybody's" experiences is a somewhat separate problem. As is a belief that I don't really have any beliefs, and so on. Any attempt to eliminate qualia or self or intentionality, etc. as being fundamental realities runs into the problem that all these things are more fundamental to my understanding than any proposed alternative.


Your definition of illusion still begs the question, either by assuming a subject or assuming a reduction to further experience is needed. Like I said, illusion only requires that perception differ from truth.

> Any attempt to eliminate qualia or self or intentionality, etc. as being fundamental realities runs into the problem that all these things are more fundamental to my understanding than any proposed alternative.

1. Perception is fundamental to understanding. Whether experience is fundamental is very questionable.

2. Being fundamental doesn't entail something is irreducible. Being in a car is fundamental to driving on a road, that doesn't entail cars are irreducible.



