I quite like the paper. I once wrote a follow-up paper about how one might interpret it now that we know about neuroplasticity. That was fun to ponder.
I looked at a specific experiment where researchers managed to map visual impulses from a camera to an electrode "pixel grid" physically placed on the tongue, such that the visual inputs were "mapped" to corresponding locations on the tongue. The research showed that, after a relatively short period of sensory adjustment, people using this setup could catch balls thrown at them while blindfolded. They would actually "see with their tongue".
You can read about that research in this 2003 article from Discover Magazine:
The Nagel question, then, is: is the fact that I experience vision a certain way really an essential aspect of "what it's like to be me"? Or is my consciousness a "further fact"... something running in the background of all my senses, no matter how they are wired (or rewired) over time? This is sometimes referred to as the "hard problem" of consciousness, summarized nicely in Wikipedia here:
Let's say you undergo a process to slowly use your brain's plasticity to get closer and closer to perceiving as a bat. Well, does that help me? Am I able to imagine what it is like to be a bat? Of course not. Well, what if you could talk to me and describe it, then would I have a true understanding of the experience of being a bat? Still no!
Nagel's point isn't about whether we could do anything to make ourselves into bats; it's rather that, when we don't have that qualitative experience, we cannot truly understand what it is like to have that experience.
If anything, the argument works even better now that you've gone and given yourself sonar: you can talk to me, unlike that dumb bat, so you can describe your experiences to me, and yet I will still have no true understanding of the experience itself.
That is (a part of) the Hard Problem of consciousness: that my studying and examining and learning about your conscious experience still does not allow me to understand it.
This underlines the suggestion at the very end: we lack the tools to properly talk about experiences, and maybe we could actually develop them. My inability to describe to you, or, an even better example, to a blind person, what it is like to see a red object may after all not be fundamental. Maybe there is a way to communicate, and in turn truly understand, what using the tongue device is like; we just have not yet discovered it.
This might all sound a little floofy, but it's really not. Think about the deep knowledge conveyed through game simulations. I've never been to space, but thanks to hours and hours of playing Kerbal Space Program, I really do have some sense of what it's like to navigate through space — a knowledge that goes far deeper than if I had only learned all of Kepler's and Newton's laws and equations.
Why is it that there's something it's like to be zukzuk? zukzuk could simply be like a computer program, simply responding to inputs. Or you could simply have an erroneous memory of actually experiencing things (instead, you simply just responded to inputs, one of which was to record an apparently subjective experience). That doesn't seem to solve the problem to me; it's just kicking the can down the road a little.
For example, I know how to ride a bike, and that gives me some idea of _what it is like_ to be someone who knows how to ride a bike — an insight I certainly wouldn't have if I only understood the mechanics of bike riding in declarative terms. The fact that I will probably never know what it is like to be a bat has a lot to do with my inability to have the know-how that a bat has (it's very hard to know how to fly and echolocate when you have neither wings nor the appropriate sensory organs). However, I'd argue that an active, deep VR simulation of bat flight might bring you just a little bit closer to having that embodied, subjective knowledge, in a way that studying bat behaviour or anatomy from a purely objective perspective never would.
I can't say it sounds plausible.
(It could be that the locations in your brain that have to be poked differ from the locations in the sender's brain, but that's not a fundamental obstacle. Either the sender studies your brain to tell you where you should do it, or instructs you how to figure out the right spots yourself.)
What if someone was a bat, turned into a human and retained the memories of being a bat?
'''Or is my consciousness a "further fact"... something running in the background of all my senses, no matter how they are wired (or rewired) over time?''' Your senses are the very faculties that allow the world to produce your conscious experience, so it is odd to frame the "hard problem" of consciousness in this way.
In general, neuroplasticity is sort of interesting for the understanding of consciousness, but it seems arguable that in the tongue experiment you are just experiencing the same qualia you have always experienced; it isn't like seeing colors for the first time.
EDIT: I did not find the specific study I was thinking of, but this is known as Molyneux's problem and there are several sources mentioned in the article.
It ended up on HN because philosophy is what I studied in college/grad school, and I like the fact that HN seems to be a very intellectually omnivorous community -- sans the annoying dilettantism you find elsewhere.
1. On what grounds can we make statements about what ought to be based on what is?
2. On what grounds can we make statements about what it is to be something based on what it is like to think about the experiences of that thing?
Edit: In case there's any confusion, I called it the is like-be problem to draw a similarity to Hume, not because anybody else calls it that.
If you wanna link this up to an older philosopher, this is basically what Kant's on about in his Critique of Pure Reason. Our special human brain stuff is responsible for providing us with the faculties (e.g. understanding of things like causation) we have for understanding the world. We are always trapped in our experience of the world, and though that shouldn't preclude us from experiencing the world, you can't separate how we experience objective reality from the fact that we experience it as humans, with human brains and faculties.
As a nit, wasn't Kant's Critique of Pure Reason written in part as a response to Hume?
"David Hume, who among all philosophers came closest to this problem, still did not conceive of it anywhere near determinately enough and in its universality ... believing himself to have brought out that such an a priori proposition is entirely impossible, and according to his inferences everything that we call metaphysics would come down to a mere delusion of an alleged insight of reason into that which has in fact merely been borrowed from experience and from habit has taken on the appearance of necessity."
The "problem" Kant is referring to is how metaphysics can be possible in spite of what Hume proved. Kant says, in the same breath, that this is the central problem of the critique, so I guess you could say that it's written almost entirely as a response to Hume.
"We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence."
All of today's science is logically compatible with the absence of subjective experience, because science cannot even define what subjective experience is.
Every "definition" of subjective experience (SE, in what follows), is actually an ostension: the reader's attention is directed to his own SE. For example:
- Perhaps nothing actually exists -- except SE. That's the one thing that MUST exist. [Descartes]
- "There is such a thing, as what it's like to be..." [Nagel, in the posted paper]
- There's such a thing, as what seeing red is like: a quale.
- Science can't rule out that a behaviorally perfect duplicate of you could lack SE (i.e. be a zombie).
Science can't rule it out, but maybe SE does: why would a zombie vociferously assert the existence of SE?
Those who claim SE is an illusion, i.e. that you actually are a zombie but don't know it, have the burden of explaining why zombies claim to have SE.
Not entirely true. Science is made up of language (syntax, meaning etc) and is also observer-relative.
That just means science is incomplete and we are ignorant, not that subjective experience is metaphysically different from all other observed phenomena.
Quoting from The Last Word (1997):
"[...] I don't want there to be a God; I don't want the universe to be like that. My guess is that this cosmic authority problem is not a rare condition and that it is responsible for much of the scientism and reductionism of our time. One of the tendencies it supports is the ludicrous overuse of evolutionary biology to explain everything about life, including everything about the human mind."
Of course his book Mind & Cosmos is the best example of this.
On the other hand dualism refitted for the physicalist world seems like a hack at best. Panpsychism fails the inference to the best explanation or Occam's razor etc.
"you can form a propositional belief that "Consciousness is without effect", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious. But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things."
For further reading consider LessWrong's Zombies sequence, which tackles Chalmers's arguments and property dualism in general: https://wiki.lesswrong.com/wiki/Zombies_(sequence)
From my perspective WRT Occam's razor, it looks quite different. If panpsychism is right, there is only one kind of matter. For panpsychism to be wrong, there have to be two different kinds of matter (conscious and unconscious), along with a mechanism to transition between the two states that is mediated by a physical process. Furthermore, despite the transition being mediated by a physical process, the difference between these two states is something undetectable to current science.
Beyond that, if you assume consciousness is the result of some physical characteristic of brains (which materialists usually do), you have to realize that most of the biochemistry going on in brains is going on all over the body. Do my organs and other body parts have their own consciousness? Furthermore, our experience of consciousness doesn't even extend to the entire brain. What is different between non-conscious parts of the brain and conscious parts? Physically the neurons themselves are basically identical. If it is the way that they're connected, those connections just change the simple physical forces acting on the particles of the brain. You again have to answer the question of how simple physical forces convert matter from unconscious to conscious.
I think you have it mixed up. Panpsychism is the dualist belief; although it differs from supernatural dualism, it's still dualism. It says that matter is singular but has two properties: the mental and the physical.
Whereas materialism is monist all the way through. You raise good questions, and science and philosophy are still working on them, but materialism doesn't require any extra properties or matter and agrees with scientific evidence, and is therefore simpler.
Let's assume that experience is irreducible, and you accept your own consciousness (which I hope you do!). That seems to leave a few possibilities:
1.) All normal matter is conscious
2.) Some normal matter is conscious - and we're back to the problems I previously listed
3.) Consciousness is matter/energy of a kind physics has completely missed somehow
From my perspective, #1 is the simplest, #2 is complicated, and #3 seems implausible given the fact that we've so precisely probed reality down to the sub-atomic scale.
Also, we should be careful about being overly reliant on scientific evidence. The only evidence for the existence of consciousness is your own subjective experience of it. There is no objective evidence that consciousness exists. It would seem to me that we should at least be able to prove the existence of consciousness before we make the claim that scientific reductionism can explain it.
Or take for example the fact that a few hundred years ago they couldn't figure out fire, so they made up a new property (not too unlike panpsychism):
"In general, substances that burned in air were said to be rich in phlogiston; the fact that combustion soon ceased in an enclosed space was taken as clear-cut evidence that air had the capacity to absorb only a finite amount of phlogiston. When air had become completely phlogisticated it would no longer serve to support combustion of any material, nor would a metal heated in it yield a calx; nor could phlogisticated air support life. Breathing was thought to take phlogiston out of the body"
The points I'm trying to make:
1. Emergent properties are real. Just saying that reducing something makes it hard to explain doesn't make that thing a magical thing.
2. History shows that it's kind of silly to invent a new type of matter or new properties when we fail to explain something.
We have this black box inside our skulls that we know very little about. It could be an amazing machine that generates consciousness, yet before trying to reverse engineer it some of us like to say "it's actually not that thing, it's something else that has the magic".
Chalmers's position is property dualism, where certain informationally rich systems have conscious properties in addition to physical ones.
Anyway, materialism is a monism, like Thales saying the world is made of water. There's nothing that necessitates the world be a monism. Maybe it is and maybe it isn't.
Panpsychism is another dualistic view of the natural world that requires no supernatural component. Everything has a mental as well as a material aspect.
And, of all the phenomena that might be identifiable as emerging from the fabric of reality, the one that distinguishes you (and other systems you recognize as sufficiently like you) from everything else just happens to be the one you've decided must objectively be set apart.
I'll ask again: in this framework, what remains "supernatural", and why can you reject it?
Anyway, the question of what fundamentally exists is an ontological question. Materialism is one possible answer. All that fundamentally exists is material objects, and everything else, including mind, society, abstract concepts, etc emerge from that material substrate.
Idealism would be the opposite answer. Everything is mental, and material objects are just ideas presented to the mind.
Everything is mathematical would be another possibility. Or information, perhaps qubits (it from quantum bits).
But maybe aspects of mind and matter are both fundamental to what exists. Or maybe it's some third neutral substance that's neither.
But whatever the case, none of that need involve the supernatural. It can, if one is disposed to think that way. One can say that God created the material world (granted, that leaves God as the something else, but at least the world gets to be entirely material).
There is no good reason to believe that any ontology you can come up with to help you describe and attempt to comprehend reality, on the time and energy scales you live on, has any basis in the fundamental nature of reality.
The only distinction between atoms and phlogistons, physiology and the theory of humors, is their explanatory and predictive power, but none of those is any more "real" than the others beyond how they help us symbolize the world in a way we can reason about.
There is no good reason to believe that consciousness is any different, other than unexamined chauvinism, or, at best, wishful thinking.
Reality doesn't give a fuck what circles you draw around some fuzzy densities in an energy field, and what labels you write on them. Cars and sandwiches, atoms and quarks, where you end and the air around you begins, these distinctions that only matter to "you". This is the tyranny of the dichotomous mind.
The amount of unjustifiable, and, on the scale of the universe, unearned arrogance that it takes to assert that you or anything about you is set apart from the rest of material reality, is staggering, and just not something that I'm capable of sharing, having seriously considered the alternative.
Honestly, I'm just not sure we can have this conversation unless you can assure me that you have fully considered this to the point of having to pull yourself out of a serious existential crisis at some point in your life.
Any platitudes that come down to "there must be something that objectively privileges the nature of our existence above that of a rock or an endless field of eternal cold emptiness" are just totally baseless, and fail every test that might distinguish it from "god has a plan for me", "love is a physical force that directly manipulates time and space", or "it's turtles all the way down".
But then to circle back around and privilege materialism as being somehow distinct from all the other ontologies humans have argued for? Are you skeptic or are you dogmatic materialist?
Pick your side. Just so you know, equating every other ontology as the equivalent of "God having a plan for my life" is piss poor philosophy, and just ignorant.
Or, rather, always does that, like all of us do all the time, because that's what permits us to continue to live and think in the conditions and on the scale we find ourselves inhabiting, because that's the basis on which our minds work. But does not attempt to claim that any of the distinctions that happen to be useful to me have any fundamental basis in the raw nature of reality.
"Materialism" of this type is the default position in the same way atheism is the default position. The burden of proof is on the claim that there's something beyond material reality, and both dualism and theism are making this claim. It is the same claim. The only difference is the details of exactly what the supernatural thing is.
That is something there is not and cannot possibly be any evidence for, beyond "just take my word for it". In that way, dualism is indistinguishable from the existence of god or gods or magic or reincarnation or any other supernatural thing. To permit the possibility of one is to permit the possibility of the other. And there is no more evidence of one than the other, nor can there be. Again, there is no reason besides chauvinism or wishful thinking to believe in a theory of reality that makes you or something about you special.
I'm not going to extend the slightest benefit of the doubt to any theory of reality which says "I, and people/things/systems I recognize as like me, am intrinsically special and set permanently apart because something about me is fundamentally different from everything else", which dualism always comes down to. Any more than I'd be inclined to trust the objective truth of any theory of politics or economics or anthropology which says "I, and people/things/systems I recognize as like me, am intrinsically special and set permanently apart because something about me is fundamentally different from everything else".
EDIT: I am definitely, full-heartedly, whatever Daniel Dennett is.
What exactly is the distinction you're trying to make between "non-physical" and "supernatural"?
What basis is left to make that distinction on, once you've abandoned the material realm?
What distinguishes your "plane of consciousness" from "heaven" or "the astral projection spirit crystal healing field" or "the power of positive feelings"?
I do agree that self-awareness and intelligence exist along a spectrum, along which we are apparently the farthest point.
I'm not saying that we are the only "conscious" systems, or that "consciousness" is something set apart from the rest of reality. I'm saying the exact opposite of that.
Your point (at least, as I'm reading it) is exactly what I was getting at with "and other systems you recognize as sufficiently like you", which I did not intend to be chemical-biology-centric.
What I am arguing against is "dualism", and its many hidden forms and names (including the one goatlover is espousing). Which is the idea that "consciousness" is some fundamental property of nature, set apart from (really: privileged above) the rest, that exists on some plane beyond material reality.
Which, I argue, there is no good reason, besides pure unexamined chauvinism, or wishful thinking at best, to believe
This is what you are also saying, no?
If I had to label myself I would say I'm a monist, in that I don't think mind and matter are separable or independent. I lean towards idealism in the sense that I suspect the set of possible states of consciousness is a superset of the set of states of matter that are possible according to the current "laws" of physics. Don't read too much into those labels though, that's only a first order approximation of my view.
Most Buddhists are atheists but not materialists. And although materialism and atheism are commonly conflated, they are very different issues, particularly considering non-abrahamic religions or philosophy in the technical sense.
Imagine a room full of philosophers and scientists arguing vociferously to determine where (or even if) to draw the line between conscious/unconscious organisms. (E.g., fish = probably; bacteria = probably not?)
This - and arguments over the definition of "is" - were the reason I stopped reading philosophy of mind papers.
The idea that there is objectively such a distinction is 100% reducible to any other attempt to assert that some definitional dichotomy "actually exists". It's exactly isomorphic to any other argument about taste. What is a sandwich, what is a car. An argument about whether a fetish is hot, between a person who has it and a person who doesn't. It's all the same damn thing.
Shit like that is why I stopped taking philosophy courses in university. It was just rehashing what always came down to this exact same essentially Platonist theory-of-forms bullshit, over and over and over.
If Nagel thinks materialists can't explain consciousness, Dennett thinks they can. E.g.
"The obvious answer to the question of whether animals have selves is that they sort of have them. [Dennett] loves the phrase 'sort of.' Picture the brain, he often says, as a collection of subsystems that 'sort of' know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they 'sort of' have consciousness, as measured by human standards." Joshua Rothman, New Yorker, MARCH 27, 2017 - http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...
More detailed counterargument by Dennett: https://www.amazon.com/DARWINS-DANGEROUS-IDEA-EVOLUTION-MEAN...
I do not think he makes this statement. He seems perfectly open to the possibility of explaining subjective experiences in physical terms but he is convinced that we are at least very far away from being able to do it. In consequence he obviously considers all current attempts flawed and lacking.
"It is impossible to exclude the phenomenological features of experience from a reduction in the same way that one excludes the phenomenal features of an ordinary substance from a physical or chemical reduction of it--namely, by explaining them as effects on the minds of human observers. If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view."
If we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue how this could be done. The problem is unique. If mental processes are indeed physical processes, then there is something it is like, intrinsically, to undergo certain physical processes. What it is for such a thing to be the case remains a mystery.
I think the critical point in your quote is the last sentence.
The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.
Sure, my experience of looking at a red object is fundamentally my experience, but there seems to be no obvious reason why we could not abstract me away and talk about the experience of an arbitrary human seeing a red object. This is also in line with the suggestions at the very end, trying to develop the tools to talk about experiences in an objective manner.
"The big mistake we’re making,” [Dennett] said, “is taking our congenial, shared understanding of what it’s like to be us, which we learn from novels and plays and talking to each other, and then applying it back down the animal kingdom. Wittgenstein”—he deepened his voice—“famously wrote, ‘If a lion could talk, we couldn’t understand him.’ But no! If a lion could talk, we’d understand him just fine. He just wouldn’t help us understand anything about lions.”
“Because he wouldn’t be a lion,” another researcher said.
“Right,” Dennett replied. “He would be so different from regular lions that he wouldn’t tell us what it’s like to be a lion. I think we should just get used to the fact that the human concepts we apply so comfortably in our everyday lives apply only sort of to animals.” He concluded, “The notorious zombie problem is just a philosopher’s fantasy. It’s not anything that we have to take seriously.”
I found that convincing; I'm curious if you do as well.
Anyhow. I did not read the entire Dennett article when it was posted here a few days ago, maybe I should, but it was just not compelling to me, at least as far as I got. What I got from the part I read is that he seems to do exactly what Nagel warns of: dismissing the experience of being a human. I find the comparison with a computer much more interesting than the comparison with animals. What if we build an artificial neural network resembling a human brain? If that is not good enough, what if we perform a molecular simulation of a brain? Or even a quantum physical simulation of a brain if molecules are still not good enough, but personally I doubt that.
But what if? Does this artificial brain experience what it is like to be a human? As a physicalist I think the answer is yes. But just as Nagel says, I have no idea how this could possibly work, how the transistors in my computer could go from controlling the flow of electrons by mindlessly following physical laws to being aware of their existence in a universe, seeing red, feeling joy and pain. What if I replaced the computer with a mechanical one made out of billions and billions of cogwheels? With stones on a beach simulating a Turing machine? With a gigantic printed look-up table mapping all possible inputs to their outputs?
I can not think of any good reason why the stones on the beach - together with someone or something moving them around to perform the computation - should be any less conscious than the human brain they are simulating. And this of course seems absurd. Thinking about this is what gets me the closest to becoming a dualist or something like that. There seems to be not even the tiniest bit of hope on the horizon to even be able to attack this problem from a physicalist perspective. So when Dennett says that there is no problem, assuming he actually says this, then I must disagree.
I had prior exposure to Dennett and, as far as I remember, quite liked what he had to say, but somehow not this time. Maybe the topic was a different one, maybe it is just the way the article is written, maybe I should just read the entire thing.
P.S. I just did some more reading on Nagel, it seems you are at least more correct than me. He seems not as open to a physicalist account of consciousness as I thought but the details are hard to tell without actually reading more of his works.
I think two inverse arguments to the one you mention are more appealing, about getting close to a human brain with neural networks but not quite being there. First, and the New Yorker article actually mentions this at the end: if you had a damaged human brain, we could all clearly see that you are still human and conscious, just not exactly the way someone with a normal healthy brain is. Second, and this gets at not over-reducing the physical aspect of consciousness: say a Nobel prize winning physicist claims to have physically located where consciousness lives in the brain (I think this actually happened). You could quickly ask her, "So, if you take that puddle of neurons and other material where consciousness lives out of your brain and put it in a jar, would you say 'you' are now sitting in that jar, experiencing what it's like to be in a jar?" (also absurd).
As for rocks, that does sound absurd! :)
Really appreciate your thoughts.
It would take around 3,000,000,000,000 years and an enormous beach to simulate one second of brain activity. It is literally unimaginable. What do you imagine when you talk about absurdity? Is it some small scale model, laid bare before your mind's eye in all its simplicity, leaving no place for consciousness to hide?
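A trillions-of-years figure is at least the right order of magnitude under some rough assumptions. All the numbers below (synapse count, firing rate, Turing-machine overhead per event, one stone moved per second) are my own illustrative guesses, not anything asserted in the thread:

```python
# Back-of-envelope: stone-Turing-machine time for one second of brain activity.
SYNAPSES = 1e15            # assumed ~10^15 synapses in a human brain
EVENTS_PER_SECOND = 100    # assumed ~100 synaptic events per synapse per second
TM_STEPS_PER_EVENT = 1e3   # assumed ~1000 Turing-machine steps to simulate one event
SECONDS_PER_YEAR = 3.156e7

# Total machine steps needed for one second of simulated brain time.
steps = SYNAPSES * EVENTS_PER_SECOND * TM_STEPS_PER_EVENT

# At one stone moved per second, each step takes one real second.
years = steps / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # on the order of 10^12, i.e. trillions of years
```

Change any assumption by a factor of ten and the conclusion survives: the beach computer is unimaginably slower than the brain it simulates, which is the point about our intuitions being untrustworthy here.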
Another way of looking at it: the significance of any particular arrangement (or sequence of arrangements) of the stones is only meaningful in the mind of the entity that is moving them around. Or perhaps any nearby viewers with the patience and far-fetched ability to make sense of the iterations of stone arrangements. The internal/external distinction between the stones themselves and the stone movers/viewers seems critical to me.
Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.
Now that is not just a pile of stones, but none of the added things seems to add much complexity. A robot pushing stones according to predetermined rules can be very simple. Even simpler than a Roomba would be a gantry crane above the stones; it could essentially be just a few motors, a claw, and a switch to detect the presence or absence of a stone. I also just realized that the state transition function would not be an unimaginable monster with room to hide something in it. You do not need much code to simulate a neural network regardless of its size, and it would probably not grow that much when encoded for a Turing machine.
Now which part of the stones and the crane feels pain and anger if a loved one dies? And we are not looking for some stones signaling certain muscle activity or the production of tears; we are looking for the internal experience of pain. Based on my beliefs I seem to be forced to accept that those stones can somehow be conscious and feel emotions, even if it seems hopeless to understand how this works. But this also has a possibly even more disturbing consequence. If piles of stones can be conscious, what prevents other objects from being so? What about stars in galaxies? What does it feel like to be a galaxy?
Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.
I cannot, no matter how hard I try. I can imagine software faking human experiences, saying it feels joy or pain; I cannot imagine it actually feeling them. Not least because I cannot even really say what the difference is. It seems to me that once I could imagine this for software, it would only be a small step to imagine the same for a pile of stones. The difference between a human and some software seems enormously larger than the difference between some software and a pile of stones, at least to me.
How can such systems have feelings? I don't know. Probably the same way a network of chemically/electrically activated neurons does.
Nagel is also making some pretty big claims, specifically about the "private" nature of experience. Anscombe (whom I also loved reading) makes similar arguments. Philosophy of Mind was never my forte (I'm a logic and ethics guy), but reading Nagel was always a breath of fresh air in what I think is a subfield marred by unnecessary technicality and equivocation.
But then it's confusing that bats in my region are being killed off by white-nose syndrome. Fungus on their noses wakes them from hibernation and causes them such stress that it eventually kills them.
Which actually does make sense, since the theory is that bats do so well immunity-wise because of the large swing in their body temperature range. A bat's temperature drops very low when hibernating, then shoots up very high when it flies.
And if science is our best attempt at creating an objective account of the world, how do we include the subjectivity of creatures different than us in that account?
If we can't, then science is incomplete, because the world isn't entirely objective. Also, if the objective (shape, extension, number, etc) is created by abstracting away from the subjective (color, smell, feel, etc), then you can't use the objective to explain the subjective, although you can correlate the two (certain brain processes result in certain subjective experiences).
Dennett is another strong opponent here. He says Mary would retort: aha, but you have a limited imagination. I know everything about color, so nothing about that experience was new to me in any non-trivial way whatsoever. Because Mary knows EVERYTHING about color. It may be impossible for us to imagine someone being omniscient about color, and that spoils the whole argument.
Similarly, what if we knew every physical fact about bats? Dennett only needs to say: we have small imaginations. We have trouble imagining that what it is like to be a bat would be the sum of all physical facts about a bat. But we can't be sure of that. There is no logical certainty. Hence Nagel proved nothing.
Yes, you end up saying "we're all robots/zombies/unconscious animals" this way, and that there's actually nothing special about being human; it's just a myth we tell ourselves. But aside from some people not finding that conclusion tasteful, there's nothing wrong with it.
Footnote 8, page 9:
> 8 It may be easier than I suppose to transcend inter-species barriers with the
> aid of the imagination. For example, blind people are able to detect objects
> near them by a form of sonar, using vocal clicks or taps of a cane. Perhaps if
> one knew what that was like, one could by extension imagine roughly what it
> was like to possess the much more refined sonar of a bat. The distance between
> oneself and other persons and other species can fall anywhere on a continuum.
> Even for other persons the understanding of what it is like to be them is only
> partial, and when one moves to species very different from oneself, a lesser
> degree of partial understanding may still be available. The imagination is
> remarkably flexible. My point, however, is not that we cannot know what it is
> like to be a bat. I am not raising that epistemological problem. My point is
> rather that even to form a conception of what it is like to be a bat (and a fortiori
> to know what it is like to be a bat) one must take up the bat's point of view.
> If one can take it up roughly, or partially, then one's conception will also be
> rough or partial. Or so it seems in our present state of understanding.
Of course, for a bat the list of possible sensations, actions, and rewards is different than it is for humans. But what doesn't change is that they are all agents acting by reinforcement learning, trying to survive.
I'm wondering why philosophy doesn't take this stance. Is it because it sounds too similar to behaviorism? Does it seem reductionist? Instead of using the parsimonious concept of a reinforcement learning agent, they use hard-to-define words such as consciousness and self. Instead of looking at what matters - reward maximization, survival - they analyze qualia and Chinese Rooms.
Philosophers, why are you ignoring recent AI research? Isn't it a waste of time to use such intuitive concepts as consciousness, free will and self? If only you could have come to a definition of consciousness you agree upon, but you can't, because it's a reification, a suitcase concept.
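For what it's worth, the "reinforcement learning agent" framing really is parsimonious in code. Here is a minimal sketch of tabular Q-learning in a made-up toy survival world (the states, rewards, and parameters are all illustrative, not taken from any particular work):

```python
import random

random.seed(1)

# Toy "survival" world: states 0..4; reaching state 4 (food) gives reward 1.
# Actions: 0 = step left, 1 = step right. Everything here is illustrative.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: one row per state

def greedy(qs):
    # Pick the highest-valued action, breaking ties at random.
    m = max(qs)
    return random.choice([i for i, v in enumerate(qs) if v == m])

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for episode in range(500):
    s = 0
    while s != GOAL:
        a = random.randrange(2) if random.random() < epsilon else greedy(q[s])
        s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted value of the best next action.
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# The learned policy prefers "right" on the way to the food state.
print([greedy(q[s]) for s in range(GOAL)])
```

Whether "reward maximization plus survival" captures anything about what it is like to be the agent is, of course, exactly the point in dispute.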
Whatever you decide about his beliefs, Nagel is a wonderful writer. Pick up a copy of Mortal Questions if you get a chance.
Always be yourself, unless you can be Batman. Then always be Batman!
... just kidding. Interesting read anyways.