Hacker News
What Is It Like to Be a Bat? (1974) [pdf] (warwick.ac.uk)
132 points by miobrien on Mar 30, 2017 | 95 comments



Not sure why this ended up on HN today, but this is a classic text read in many introductory courses focused on philosophy of mind and consciousness.

I quite like the paper. I once wrote a follow-up paper about how one might interpret it now that we know about neuroplasticity. That was fun to ponder.

I looked at a specific experiment in which researchers managed to map visual input from a camera to an electrode "pixel grid" physically placed on the tongue, so that visual inputs were mapped to corresponding locations on the tongue. The research showed that, after a relatively short period of sensory adjustment, blindfolded people could catch balls thrown at them using this setup. They would actually "see with their tongue".

You can read about that research in this 2003 article from Discover Magazine:

http://discovermagazine.com/2003/jun/feattongue

The Nagel question, then: is the fact that I experience vision a certain way really an essential aspect of "what it's like to be me"? Or is my consciousness a "further fact"... something running in the background of all my senses, no matter how they are wired (or rewired) over time? This is sometimes referred to as the "hard problem" of consciousness, summarized nicely in Wikipedia here:

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


I think that's an interesting offshoot of the discussion, but I'm not sure that it really affects the argument that much.

Let's say you undergo a process to slowly use your brain's plasticity to get closer and closer to perceiving as a bat. Well, does that help me? Am I able to imagine what it is like to be a bat? Of course not. Well, what if you could talk to me and describe it, then would I have a true understanding of the experience of being a bat? Still no!

Nagel's point isn't about whether we could do anything to make ourselves into bats, it's rather that, when we don't have that qualitative experience, we cannot truly understand what it is like to have that experience.

If anything, the argument works even better now that you've gone and given yourself sonar: you can talk to me, unlike that dumb bat, so you can describe your experiences to me, and yet I will still have no true understanding of the experience itself.

That is (a part of) the Hard Problem of consciousness: that my studying and examining and learning about your conscious experience still does not allow me to understand it.


> If anything, the argument works even better now that you've gone and given yourself sonar: you can talk to me, unlike that dumb bat, so you can describe your experiences to me, and yet I will still have no true understanding of the experience itself.

This underlines the suggestion at the very end of the paper: we lack the tools to properly talk about experiences, and maybe we could actually develop them. My inability to describe to you (or, an even better example, to a blind person) what it is like to see a red object may after all not be fundamental. Maybe there is a way to communicate, and in turn truly understand, what using the tongue device is like; we just have not yet discovered it.


One of the more interesting proposals I was faced with as part of my Cog Sci & AI undergrad was that we do indeed have ways of communicating that subjective/experiential knowledge — knowledge which is otherwise ineffable. Art is one obvious example of this. At its best, art is able to communicate things that a purely propositional language cannot. The same could be said of rituals — participation in a ritual is a formalized way of forcing you to take on specific states of being that convey a kind of knowledge that can only be acquired by embodying it.

This might all sound a little floofy, but it's really not. Think about the deep knowledge conveyed through game simulations. I've never been to space, but thanks to hours and hours of playing Kerbal Space Program, I really do have some sense of what it's like to navigate through space — a knowledge that goes far deeper than if I had only learned all of Kepler's and Newton's laws and equations.


There may be some relationship between that and subjective experience, but it's not really the same thing. You're essentially describing "know how" instead of "know that". But here we're talking about something that doesn't actually seem to be representable in physical terms at all.

Why is it that there's something it's like to be zukzuk? zukzuk could simply be like a computer program, merely responding to inputs. Or you could have an erroneous memory of actually experiencing things (when in fact you just responded to inputs, one of which was to record an apparently subjective experience). That doesn't seem to solve the problem to me; it's just kicking the can down the road a little.


The distinction between "know how" (aka procedural knowledge) and "know that" (declarative knowledge) is a favourite in cognitive science, but I think it is a bit shallow. I guess the point I'm making here is that "knowing how" is a major part of knowing "what it is like" to be something.

For example, I know how to ride a bike, and that gives me some idea of _what it is like_ to be someone who knows how to ride a bike — an insight I certainly wouldn't have if I only understood the mechanics of bike riding in declarative terms. The fact that I will probably never know what it is like to be a bat has a lot to do with my inability to have the know-how that a bat has (it's very hard to know how to fly and echolocate when you have neither wings nor the appropriate sensory organs). However, I'd argue that an active, deep VR simulation of bat flight might bring me just a little bit closer to having that embodied, subjective knowledge, in a way that studying bat behaviour or anatomy from a purely objective perspective never would.


What would it be like? A series of sounds that reprogram brain structure to develop image-processing networks?

I can't say it sounds plausible.


Think about explaining general relativity, quantum chromodynamics, or even just the way a second-level cache works to a random human from the early Stone Age. Probably not an easy task. I am not sure this is actually a fair analogy; describing what seeing red is like is in some sense very different from explaining the theory of general relativity. But on the other hand, the situations are also somewhat similar: the Stone Age guy just does not possess the required language (probably to a large extent math) and knowledge, and is unable to look back on the several thousand years of human history and innovation that shaped us. Sure, it looks like a stretch to us that we will ever be able to meaningfully convey the experience of seeing a red object, but I am not convinced that this is a given. Intuitively I certainly agree with you, not going to happen, but maybe that is just a lack of imagination.


As a kind of brute-force proof of concept: it could be a series of instructions for poking your brain with an electrified needle so as to invoke the same thoughts.

(It could be that the locations that have to be poked in your brain differ from the locations it would take in the sender's brain, but that's not a fundamental obstacle. Either the sender studies your brain to tell you where you should do it, or instructs you how to figure out the right spots.)


But does it sound square?


I think it's related to Mill's concept of a Competent Judge.

What if someone was a bat, turned into a human and retained the memories of being a bat?

http://www.betsymccall.net/edu/philo/higher%20pleasures2.pdf


> The Nagel question, then: is the fact that I experience vision a certain way really an essential aspect of "what it's like to be me"?

It is an essential aspect of what it is like to be you. (However, there is an ambiguity here about temporal scope. What it is precisely like to be me now is to have precisely the experience I am having now; what it is like to be me these days is to have the kinds of experiences I have these days; and what it is like to be me over my whole life is to have a similar pattern of experience to mine over a whole life.) And it isn't an essential aspect of what it is to be you. (No one would claim that Mary becomes a different person when she sees red for the first time. This is compatible with the claim that, because of the nature of personal identity, it is impossible for a person to undergo certain very radical changes in the nature of their conscious experience.)

> Or is my consciousness a "further fact"... something running in the background of all my senses, no matter how they are wired (or rewired) over time?

Your senses are the very faculties that allow the world to produce your conscious experience, so it is odd to frame the "hard problem" of consciousness in this way.

In general, neuroplasticity is sort of interesting for the understanding of consciousness, but it seems arguable that in the tongue experiment you are just experiencing the same qualia you have always experienced; it isn't like seeing colors for the first time.


Now I would be interested in the following experiment. Teach someone blind from birth to see with this tongue device, then restore their eyesight surgically and test whether they can recognize shapes. Why? I once read about an experiment where researchers restored the eyesight of a person blind from birth, and the person was unable to identify shapes by looking at them. The subject could easily identify the shapes by touching them, and knew how a round ball or the corners of a cube felt, but that was evidently not enough to figure out what a corner looks like. I wonder if the tongue device would be sufficient to provide such an understanding.


How long did they wait for the person to learn?


They did this immediately after the surgery; I am pretty sure the person soon learned what different shapes look like. They just wanted to test whether an understanding of shapes gained by touching them transfers to an understanding of what they look like. I will try to find the experiment in case you are interested in the details.

EDIT: Did not find the specific study I was thinking of but this is known as Molyneux's problem [1] and there are several sources mentioned in the article.

[1] https://plato.stanford.edu/entries/molyneux-problem/


Interestingly, Nagel actually seems to acknowledge/anticipate this point on page 9, see reply to sibling:

https://news.ycombinator.com/item?id=13999907


Cool, thanks for the links.

It ended up on HN because philosophy is what I studied in college/grad school, and I like the fact that HN seems to be a very intellectually omnivorous community -- sans the annoying dilettantism you find elsewhere.


I have a suspicion that future generations of philosophers will be taught Nagel's is like-be problem in the fashion that Hume's is-ought problem [1] is now taught. Although they concern completely different subjects, the similarities between the two are striking:

1. On what grounds can we make statements about what ought to be based on what is?

2. On what grounds can we make statements about what it is to be something based on what it is like to think about the experiences of that thing?

Edit: In case there's any confusion, I called it the is like-be problem to draw a similarity to Hume, not because anybody else calls it that.

[1]: https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem


Yeah, I think this is probably true. Hume was trying to argue that normative statements can't be reduced to or derived from natural statements about how the world is; Nagel's fighting the reduction of consciousness to positive, objective statements about consciousness, because of the existence of phenomenal character. There's a certain way to be a bat that's wrapped up in the very fact of its being a conscious being with a specific biology and neurology, but which is still irreducible to those features.

If you wanna link this up to an older philosopher, this is basically what Kant's on about in his Critique of Pure Reason. Our special human brain stuff is responsible for providing us with the faculties (e.g. understanding of things like causation) we have for understanding the world. We are always trapped in our experience of the world, and though that shouldn't preclude us from experiencing the world, you can't separate how we experience objective reality from the fact that we experience it as humans, with human brains and faculties.


Absolutely, and you've said it better than I did.

As a nit, wasn't Kant's Critique of Pure Reason written in part as a response to Hume?


In CPR here is what Kant says about Hume:

"David Hume, who among all philosophers came closest to this problem, still did not conceive of it anywhere near determinately enough and in its universality ... believing himself to have brought out that such an a priori proposition is entirely impossible, and according to his inferences everything that we call metaphysics would come down to a mere delusion of an alleged insight of reason into that which has in fact merely been borrowed from experience and from habit has taken on the appearance of necessity."

The "problem" Kant is referring to is how metaphysics can be possible in spite of what Hume proved. Kant says in the same breath that this is the central problem of the Critique, so I guess you can nearly say that it's written almost entirely as a response to Hume.


ooooh totally was. He was deeply affected by Hume's attack on causation, so tried to come up with a coherent response. Ended up positing a whole bunch of wacky and (I think) true things about our embedded cognition, so I think it's probably fair to say that Critique of Pure Reason is the result of his beef with Hume.


In the CPR, Kant proposes a synthesis of both rationalism and empiricism: although some knowledge comes from sensory experience, there are some things that we just know -- e.g., pure mathematics. So I'd say it's the result of his beef with everyone!


Doesn't this all go back to Plato, where Plato thought knowledge depended on the forms, because the flux of particulars cannot provide knowledge?


That's a great point about the similarity of the two problems. If we didn't already know from intimate personal experience that there is such a thing as consciousness, no amount of physical observation could ever be sufficient to prove to us that it exists. Similarly, if we weren't well prepared (on other grounds entirely) to assert that certain things ought or ought not to be done, no amount of observing the actual state of things could be sufficient to show that there is actually good or evil in the world.


The following two sentences, from the second page of the paper, are where he really nails it:

"We may call this the subjective character of experience. It is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence."

All of today's science is logically compatible with the absence of subjective experience, because science cannot even define what subjective experience is.

Every "definition" of subjective experience (SE, in what follows), is actually an ostension: the reader's attention is directed to his own SE. For example:

  - Perhaps nothing actually exists -- except SE.  That's the one thing that MUST exist. [Descartes]

  - "There is such a thing, as what it's like to be..." [Nagel, in the posted paper]

  - There's such a thing, as what seeing red is like: a quale.

  - Science can't rule out that a behaviorally perfect duplicate of you, could lack SE (i.e. be a zombie).
    Science can't rule it out, but maybe SE does: why would a zombie vociferously assert the existence of SE?
    Those who claim SE is an illusion, i.e. that you actually are a zombie but don't know it,
    have the burden of explaining why zombies claim to have SE.
All of these "definitions" require an audience that has SE and is able to recognize it. A scientific definition would not.


>All of these "definitions" require an audience that has SE and is able to recognize it. A scientific definition would not.

Not entirely true. Science is made up of language (syntax, meaning, etc.) and is also observer-relative.


Well, that's precisely intrinsic to the problem of being subjective. You cannot objectively describe subjective experience, because otherwise it would cease to be subjective and become objective. It is reasonably probable that subjective experience will remain outside of science forever — unless we obtain so great a knowledge of the human brain that we can actually see some kind of physical change happening without a physical cause.


>All of today's science is logically compatible with the absence of subjective experience, because science cannot even define what subjective experience is.

That just means science is incomplete and we are ignorant, not that subjective experience is metaphysically different from all other observed phenomena.


It's compatible with both propositions. At the moment we have no good grounds for drawing either conclusion, although we have plenty of weak grounds in both directions.


When you say that, are you making an empirical claim or a metaphysical one?


What I like about Thomas Nagel is that he is one of the few atheists who steps up against darwinism and materialism as an explanation for almost everything.

Quoting from The Last Word (1997):

"[...] I don't want there to be a God; I don't want the universe to be like that. My guess is that this cosmic authority problem is not a rare condition and that it is responsible for much of the scientism and reductionism of our time. One of the tendencies it supports is the ludicrous overuse of evolutionary biology to explain everything about life, including everything about the human mind."

Of course, his book Mind & Cosmos is the best example of this.


I agree that Darwinism is overused to explain everything, but materialism seems inescapable -- especially for atheists. I think even Nagel agrees with "inference to the best explanation", and so far I've never seen a single case where anything was explained better than by the materialist explanation.


Materialism is not inescapable for atheists in philosophy at all. You can be anti-realist or an idealist sans God. You can also be a platonist. Or you could be a dualist like Chalmers. There's plenty of room for different views than materialism without committing to a belief in the supernatural.


I guess I meant physicalism then. Or maybe naturalism? I didn't mean to make a claim on the philosophy of mind just that the world is simply the physical world.

On the other hand dualism refitted for the physicalist world seems like a hack at best. Panpsychism fails the inference to the best explanation or Occam's razor etc.


It depends on whether physicalism can account for all aspects of mind along with abstract concepts and the laws of nature (or causation). To be a successful monism, one has to be able to explain everything in terms of that monism.


Yes, and many have done so. Marvin Minsky, Steven Pinker, and Daniel Dennett come to mind. In philosophy lingo I think it all ends up being functionalist physicalism.


There's no consensus that Minsky, Pinker, Dennett have succeeded. Plenty of philosophers and scientists have disagreed with their arguments. Dennett's attempt rests on the controversial claim that we're actually p-zombies, and consciousness can be understood in only functional terms. This is prima facie absurd to many people.


Dennett's argument is simply an inversion of the causal chain: i.e., mind is not merely an effect but a cause. "P-zombies" are kind of a non-starter when you think about it. I think LessWrong's essays on the subject show that; here is one passage:

"you can form a propositional belief that "Consciousness is without effect", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious. But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things."

http://lesswrong.com/lw/p7/zombies_zombies/


Could you please elaborate in detail on the ways in which you feel that panpsychism fails?


I can't do it in detail; it would take hours. From a simplicity point of view (Occam's razor), it adds an extraordinary claim while falling short on any evidence. Compare it to a physicalist account like Minsky's, Hofstadter's, or Dennett's, where they don't require anything other than what we already know about the world.

For further reading, consider LessWrong's Zombies sequence, which tackles Chalmers's arguments and property dualism in general: https://wiki.lesswrong.com/wiki/Zombies_(sequence)


If you could boil it down into a few key sentences, what would they be?

From my perspective, with respect to Occam's razor it looks quite different. If panpsychism is right, there is only one kind of matter. For panpsychism to be wrong, there have to be two different kinds of matter (conscious and unconscious), along with a mechanism to transition between the two states that is mediated by a physical process. Furthermore, despite the transition being mediated by a physical process, the difference between these two states is something undetectable to current science.

Beyond that, if you assume consciousness is the result of some physical characteristic of brains (which materialists usually do), you have to realize that most of the biochemistry going on in brains is going on all over the body. Do my organs and other body parts have their own consciousness? Furthermore, our experience of consciousness doesn't even extend to the entire brain. What is different between non-conscious parts of the brain and conscious parts? Physically the neurons themselves are basically identical. If it is the way that they're connected, those connections just change the simple physical forces acting on the particles of the brain. You again have to answer the question of how simple physical forces convert matter from unconscious to conscious.


>For panpsychism to be wrong, there has to be two different kinds of matter (conscious and unconscious)

I think you have it mixed up. Panpsychism is the dualist belief; although it differs from supernatural dualism, it's still dualism. It says that matter is singular but has two properties: the mental and the physical.

Whereas materialism is monist all the way through. You raise good questions, and science and philosophy are still working on them, but materialism doesn't require any extra properties or kinds of matter and agrees with the scientific evidence, and is therefore simpler.


Conscious experiences are correlated with, but irreducible to, physical phenomena. You can say that red "is" light with a wavelength of ~700nm, but that is insufficient to communicate the experience of red to someone born blind. That is really the more important aspect of the hard problem, in my opinion.

Let's assume that experience is irreducible, and you accept your own consciousness (which I hope you do!). That seems to leave a few possibilities:

1.) All normal matter is conscious

2.) Some normal matter is conscious - and we're back to the problems I previously listed

3.) Consciousness is matter/energy of a kind physics has completely missed somehow

From my perspective, #1 is the simplest, #2 is complicated, and #3 seems implausible given the fact that we've so precisely probed reality down to the sub-atomic scale.

Also, we should be careful about being overly reliant on scientific evidence. The only evidence for the existence of consciousness is your own subjective experience of it. There is no objective evidence that consciousness exists. It would seem to me that we should at least be able to prove the existence of consciousness before we make the claim that scientific reductionism can explain it.


"Liquidity is irreducible to physical phenomena. If we break it down to atoms and look at each atom, we will see that it's not liquid. Therefore we must conclude that there are two types of matter in the world: liquid matter and non-liquid matter."

Or take, for example, the fact that a few hundred years ago people couldn't figure out fire, so they made up a new property (not too unlike panpsychism):

"In general, substances that burned in air were said to be rich in phlogiston; the fact that combustion soon ceased in an enclosed space was taken as clear-cut evidence that air had the capacity to absorb only a finite amount of phlogiston. When air had become completely phlogisticated it would no longer serve to support combustion of any material, nor would a metal heated in it yield a calx; nor could phlogisticated air support life. Breathing was thought to take phlogiston out of the body"

The points I'm trying to make:

1. Emergent properties are real. The fact that reducing something makes it hard to explain doesn't make that thing magical.

2. History shows that it's kind of silly to invent a new type of matter or new properties when we fail to explain something.

We have this black box inside our skulls that we barely know anything about. It could be an amazing machine that generates consciousness, yet before trying to reverse engineer it, some of us like to say "it's actually not that thing, it's something else that has the magic".


How would you distinguish between "dualism" and "the supernatural"?


Just say that the mind is part of the natural world, which means the world isn't simply material. It's both mental and material.

Chalmers's position is property dualism, where certain informationally rich systems have conscious properties in addition to physical ones.

Anyway, materialism is a monism, like Thales saying the world is made of water. There's nothing that necessitates the world be a monism. Maybe it is and maybe it isn't.

Panpsychism is another dualistic view of the natural world that requires no supernatural component. Everything has a mental as well as a material aspect.


So you distinguish between dualism and the supernatural by expanding your definition of "natural" to include a distinct realm of non-material "nature" with no definable rules.

And, of all the phenomena that might be identifiable as emerging from the fabric of reality, the one that distinguishes you (and other systems you recognize as sufficiently like you) from everything else just happens to be the one you've decided must objectively be set apart.

I'll ask again: in this framework, what remains "supernatural", and why can you reject it?


Who says the mental has no rules?

Anyway, the question of what fundamentally exists is an ontological question. Materialism is one possible answer. All that fundamentally exists is material objects, and everything else, including mind, society, abstract concepts, etc emerge from that material substrate.

Idealism would be the opposite answer. Everything is mental, and material objects are just ideas presented to the mind.

Everything is mathematical would be another possibility. Or information, perhaps qubits (it from quantum bits).

But maybe aspects of mind and matter are both fundamental to what exists. Or maybe it's some third neutral substance that's neither.

But whatever the case, none of that need involve the supernatural. It can, if one is disposed to think that way. One can say that God created the material world (granted, that leaves God as the something else, but at least the world gets to be entirely material).


I'm just going to come out and say it.

There is no good reason to believe that any ontology you can come up with to help you describe and attempt to comprehend reality, on the time and energy scales you live on, has any basis in the fundamental nature of reality.

The only distinction between atoms and phlogiston, physiology and the theory of humors, is their explanatory and predictive power; none of them is any more "real" than the others beyond how they help us symbolize the world in a way we can reason about.

There is no good reason to believe that consciousness is any different, other than unexamined chauvinism, or, at best, wishful thinking.

Reality doesn't give a fuck what circles you draw around some fuzzy densities in an energy field, and what labels you write on them. Cars and sandwiches, atoms and quarks, where you end and the air around you begins, these distinctions that only matter to "you". This is the tyranny of the dichotomous mind.

The amount of unjustifiable, and, on the scale of the universe, unearned arrogance that it takes to assert that you or anything about you is set apart from the rest of material reality, is staggering, and just not something that I'm capable of sharing, having seriously considered the alternative.

Honestly, I'm just not sure we can have this conversation unless you can assure me that you have fully considered this to the point of having to pull yourself out of a serious existential crisis at some point in your life.

Any platitudes that come down to "there must be something that objectively privileges the nature of our existence above that of a rock or an endless field of eternal cold emptiness" are just totally baseless, and fail every test that might distinguish it from "god has a plan for me", "love is a physical force that directly manipulates time and space", or "it's turtles all the way down".


Doesn't materialism fall into "any ontology" that you're ranting against? That's fine, skeptics about the nature of external reality have been around since the ancient Greeks.

But then to circle back around and privilege materialism as being somehow distinct from all the other ontologies humans have argued for? Are you a skeptic or a dogmatic materialist?

Pick your side. Just so you know, treating every other ontology as the equivalent of "God having a plan for my life" is piss-poor philosophy, and just ignorant.


An ontology is an attempt to define any divisions beyond "all of that which is". Whatever I "am", philosophically, is whatever label describes somebody who does not presume to make any such distinctions.

Or, rather, somebody who does always make such distinctions, as all of us do all the time, because that's what permits us to continue to live and think in the conditions and on the scale we find ourselves inhabiting, because that's the basis on which our minds work. But who does not claim that any of the distinctions that happen to be useful to me have any fundamental basis in the raw nature of reality.

"Materialism" of this type is the default position in the same way atheism is the default position. The burden of proof is on the claim that there's something beyond material reality, and both dualism and theism are making this claim. It is the same claim. The only difference is the details of exactly what the supernatural thing is.

That is something there is not and cannot possibly be any evidence for, beyond "just take my word for it". In that way, dualism is indistinguishable from the existence of god or gods or magic or reincarnation or any other supernatural thing. To permit the possibility of one is to permit the possibility of the other. And there is no more evidence of one than the other, nor can there be. Again, there is no reason besides chauvinism or wishful thinking to believe in a theory of reality that makes you or something about you special.

I'm not going to extend the slightest benefit of the doubt to any theory of reality which says "I, and people/things/systems I recognize as like me, am intrinsically special and set permanently apart because something about me is fundamentally different from everything else", which dualism always comes down to. Any more than I'd be inclined to trust the objective truth of any theory of politics or economics or anthropology which says "I, and people/things/systems I recognize as like me, am intrinsically special and set permanently apart because something about me is fundamentally different from everything else".

----

EDIT: I am definitely, wholeheartedly, whatever Daniel Dennett is.


Also, redefining "nature" to include non-physical things does not get you around the problem. We run into a language limitation when we attempt to define away this fundamentally impossible premise. That is what I mean when I ask "what is supernatural in a reality where you're already permitting things that do not have a material basis?"

What exactly is the distinction you're trying to make between "non-physical" and "supernatural"?

What basis is left to make that distinction on, once you've abandoned the material realm?

What distinguishes your "plane of consciousness" from "heaven" or "the astral projection spirit crystal healing field" or "the power of positive feelings"?


I don't think consciousness is a distinguishing characteristic. We are all just highly complex chemical chain reactions occurring in the context of a changing environment. Our chain reactions are sufficiently complex that they're capable of communicating information to other chain reactions, and we happen to communicate about our experience as a chain reaction. You can't infer from the absence of communication that other chemical systems don't have experience.

I do agree that self-awareness and intelligence exist along a spectrum, along which we are apparently the farthest point.


Unless I'm missing something, you've just described "materialism", which is exactly what I am arguing for.

I'm not saying that we are the only "conscious" systems, or that "consciousness" is something set apart from the rest of reality. I'm saying the exact opposite of that.

Your point (at least, as I'm reading it) is exactly what I was getting at with "and other systems you recognize as sufficiently like you", which I did not intend to be chemical-biology-centric.

What I am arguing against is "dualism", and its many hidden forms and names (including the one goatlover is espousing). Which is the idea that "consciousness" is some fundamental property of nature, set apart from (really: privileged above) the rest, that exists on some plane beyond material reality.

Which, I argue, there is no good reason, besides pure unexamined chauvinism, or wishful thinking at best, to believe

This is what you are also saying, no?


I think it is logical to conclude that we are not "blessed" matter, and there really isn't a good basis for excluding non-biological matter from the property of having an internal experience. I wouldn't call myself a materialist though, because I don't know that mental states are caused by physical states. It is just as possible that the arrow of causality goes the other way - evidence only demonstrates correlation. Additionally, while I'm skeptical of the possibility of matter without consciousness, I'm willing to consider the possibility that consciousness can exist in a state that doesn't correspond to what we currently consider "matter"

If I had to label myself I would say I'm a monist, in that I don't think mind and matter are separable or independent. I lean towards idealism in the sense that I suspect the set of possible states of consciousness is a superset of the set of states of matter that are possible according to the current "laws" of physics. Don't read too much into those labels though, that's only a first order approximation of my view.


Atheism may be defined as materialism, but more precisely it means the belief or knowledge of the non-existence of a creator god or at least a deity overlord.

Most Buddhists are atheists but not materialists. And although materialism and atheism are commonly conflated, they are very different issues, particularly considering non-abrahamic religions or philosophy in the technical sense.


I don't like to argue about definitions, so let me ask you this: Don't you agree that the majority of self-proclaimed atheists are non-religious and also reject the supernatural?


Donald Davidson's supervenience views occupy the in between space you describe https://plato.stanford.edu/entries/supervenience/


I think it's rather refreshing to have someone speak out against scientism, thank you for the book.


I went to grad school to study the neuroscientific basis of consciousness, and this paper is a classic. It inspired one of the weirder conference sessions I've ever been to at ASSC, called "Is a fish conscious?"

Imagine a room full of philosophers and scientists arguing vociferously to determine where (or even if) to draw the line between conscious/unconscious organisms. (E.g., fish=probably, bacteria, probably not?)


A large part of the problem is that people don't all agree on the definition of consciousness. It ranges anywhere from the property of having an internal experience (which I subscribe to) all the way to self awareness, and even to the ability for rational thought.

This - and arguments over the definition of "is" - was the reason I stopped reading philosophy of mind papers.


Even having that argument in the first place is basically crypto-dualism.

The idea that there is objectively such a distinction is 100% reducible to any other attempt to assert that some definitional dichotomy "actually exists". It's exactly isomorphic to any other argument about taste. What is a sandwich, what is a car. An argument about whether a fetish is hot, between a person who has it and a person who doesn't. It's all the same damn thing.

Shit like that is why I stopped taking philosophy courses in university. It was just rehashing what always came down to this exact same essentially Platonist theory-of-forms bullshit, over and over and over.


TLDR: please read every word, because this work is that brilliant and that important.


This is a great essay, and I would highly recommend this as a counterpoint to Nagel in terms of explaining consciousness ("qualia") if you want to get the most out of Nagel's argument: http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...

If Nagel thinks materialists can't explain consciousness, Dennett thinks they can. E.g.

"The obvious answer to the question of whether animals have selves is that they sort of have them. [Dennett] loves the phrase 'sort of.' Picture the brain, he often says, as a collection of subsystems that 'sort of' know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they 'sort of' have consciousness, as measured by human standards." Joshua Rothman, New Yorker, MARCH 27, 2017 - http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...

More detailed counterargument by Dennett: https://www.amazon.com/DARWINS-DANGEROUS-IDEA-EVOLUTION-MEAN...


> If Nagel thinks materialists can't explain consciousness [...]

I do not think he makes this statement. He seems perfectly open to the possibility of explaining subjective experiences in physical terms but he is convinced that we are at least very far away from being able to do it. In consequence he obviously considers all current attempts flawed and lacking.


I appreciate the pushback. Here is what Nagel says in his essay (emphasis mine). I wonder if you still disagree?

"It is impossible to exclude the phenomenological features of experience from a reduction in the same way that one excludes the phenomenal features of an ordinary substance from a physical or chemical reduction of it--namely, by explaining them as effects on the minds of human observers. If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view."


Let me counter with this quote.

> If we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue how this could be done. The problem is unique. If mental processes are indeed physical processes, then there is something it is like, intrinsically, to undergo certain physical processes. What it is for such a thing to be the case remains a mystery.

I think the critical point in your quote is the last sentence.

> The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.

Sure, my experience of looking at a red object is fundamentally my experience, but there seems to be no obvious reason why we could not abstract me away and talk about the experience of an arbitrary human seeing a red object. This is also in line with the suggestions at the very end, trying to develop the tools to talk about experiences in an objective manner.


I don't think I fully understand what Nagel means by developing objective tools to describe experiences; I should re-read the full article a few more times. I think, though, that you're exactly right: We _do_ have a shared understanding and experience of what it's like to see a red ball, and we can talk about it in the abstract. My reading is that because we do have a shared human experience, there can and indeed must be a physical account (Nagel's term) of it. Nagel's argument brings in a wedge of doubt about whether that shared experience itself is real by saying, at least partially, that because we can't explain what it is like to be a bat (the bat's point of view) in human terms, we won't be able to explain what consciousness is for humans _or_ bats with a physical account (at all, or at least right now). Dennett, I think, says that this is more or less backward, and that all it means is that there are different kinds of consciousness. In the article I mention he says,

"The big mistake we’re making,” [Dennett] said, “is taking our congenial, shared understanding of what it’s like to be us, which we learn from novels and plays and talking to each other, and then applying it back down the animal kingdom. Wittgenstein”—he deepened his voice—“famously wrote, ‘If a lion could talk, we couldn’t understand him.’ But no! If a lion could talk, we’d understand him just fine. He just wouldn’t help us understand anything about lions.”

“Because he wouldn’t be a lion,” another researcher said.

“Right,” Dennett replied. “He would be so different from regular lions that he wouldn’t tell us what it’s like to be a lion. I think we should just get used to the fact that the human concepts we apply so comfortably in our everyday lives apply only sort of to animals.” He concluded, “The notorious zombie problem is just a philosopher’s fantasy. It’s not anything that we have to take seriously.”

I found that convincing; I'm curious if you do as well.


I agree insofar as being a lion or a bat is probably really different from being a human. It is a pity that I cannot remember what it felt like when I was only one or two years old; that might provide a glimpse of the difference. In all the time I can remember I was not fundamentally different from now, or at least that is how I remember it. I would really like to know what it was like at a very young age: not speaking or understanding a language, not recognizing myself in a mirror, maybe not even being aware of my own existence.

Anyhow. I did not read the entire Dennett article when it was posted here a few days ago, maybe I should, but it was just not compelling to me [1], at least as far as I got. What I got from the part I read is that he seems to do exactly what Nagel warns of, dismissing the experience of being a human. I find the comparison with a computer much more interesting than the comparison with animals. What if we build an artificial neural network resembling a human brain? If that is not good enough, what if we perform a molecular simulation of a brain? Or even a quantum physical simulation of a brain if molecules are still not good enough, but personally I doubt that.

But what if? Does this artificial brain experience what it is like to be a human? As a physicalist I think the answer is yes. But just as Nagel says, I have no idea how this could possibly work - how the transistors in my computer could go from controlling the flow of electrons by mindlessly following physical laws to being aware of their existence in a universe, seeing red, feeling joy and pain. What if I replaced the computer with a mechanical one made out of billions and billions of cogwheels? With stones on a beach simulating a Turing machine? With a gigantic printed look-up table mapping all possible inputs to their outputs?

I cannot think of any good reason why the stones on the beach - together with someone or something moving them around to perform the computation - should be any less conscious than the human brain they are simulating. And this of course seems absurd. Thinking about this is what gets me closest to becoming a dualist or something like that. There seems to be not even the tiniest bit of hope on the horizon of being able to attack this problem from a physicalist perspective. So when Dennett says that there is no problem, assuming he actually says this, I must disagree.

[1] I had prior exposure to Dennett and, as far as I remember, quite liked what he had to say but somehow not this time. Maybe the topic was a different one, maybe it is just the way the article is written, maybe I should just read the entire thing.

P.S. I just did some more reading on Nagel, it seems you are at least more correct than me. He seems not as open to a physicalist account of consciousness as I thought but the details are hard to tell without actually reading more of his works.


The New Yorker article on Dennett is actually not up to the New Yorker's usual standards, but it gets at a few key points and punchlines. I linked to one of Dennett's books that I think every physicist would enjoy in my first comment.

I think two inverse arguments to the one you mention are more appealing, about getting close to a human brain with neural networks but not quite being there. First, and the New Yorker article actually mentions this at the end: if you had a damaged human brain, we could all clearly see that you are still human and conscious, just not exactly the way someone with a normal healthy brain is. Second, and this gets to not over-reducing the physical aspect of consciousness: say a Nobel-prize-winning physicist claims to have located physically in the brain where consciousness lives (I think this actually happened). You could quickly ask her, "So, if you take that puddle of neurons and other material where consciousness lives out of your brain and put it in a jar, would you say 'you' are now sitting in that jar, experiencing what it's like to be in a jar?" (also absurd).

As for rocks, that does sound absurd! :)

Really appreciate your thoughts.


> I can not think of any good reason why the stones on the beach - together with someone or something moving them around to perform the computation - should be any less conscious than the human brain they are simulating. And this seems of course absurd.

It would take around 3,000,000,000,000 years and an enormous beach to simulate one second of brain activity. It is literally unimaginable. What do you imagine when you talk about absurdity? Is it some small-scale model, laid bare before your mind's eye in all its simplicity, leaving no place for consciousness to hide?
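For scale, here is one rough way to land on a number of that order. All the figures are assumed round numbers for illustration, not measurements:

```python
# Back-of-envelope estimate for simulating one second of brain
# activity by moving stones. Every input here is an assumption:
#   ~1e15 synaptic events per second of brain activity,
#   ~1e5 stone moves of Turing-machine bookkeeping per event,
#   1 stone moved per second.
ops_per_brain_second = 1e15
moves_per_op = 1e5          # assumed Turing-machine overhead
moves_per_second = 1.0      # one stone per second

seconds = ops_per_brain_second * moves_per_op / moves_per_second
years = seconds / 3.15e7    # ~seconds in a year
print(f"{years:.1e}")       # on the order of 3e12 years
```

Change any of the assumed constants by a couple of orders of magnitude and the conclusion - incomprehensibly long - survives.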


I certainly see a beach with a few hundred or so stones when I close my eyes, but I don't think that matters. It is simply the idea that stones on a beach - a lot of stones on a very long beach, in a very specific arrangement, relentlessly reordered by Tom Hanks for billions of billions of years according to a very long list of rules overseen by Wilson - could really feel joy and pain that seems absurd. It is a common argument that it is just the sheer scale that would be required, and our inability to imagine it, that leads us astray, but in the end it just seems wrong that stones on a beach can feel pain, at least to me. But if you think about a computer simulating a brain at the level of neurons, something that is somewhat in reach, does that make it any easier? Does it sound so much less absurd that a data center packed with GPUs could really feel pain?


Stones don't communicate among themselves, and only change state under the entirely external effort of the entity that moves them. I would argue that the stones would need some mechanism for modifying their own state to have even a slim chance at consciousness. Seeing as stones are inanimate objects that can't possibly operate any mechanism, I think the idea is dead in the water.

Another way of looking at it: the significance of any particular arrangement (or sequence of arrangements) of the stones is only meaningful in the mind of the entity that is moving them around. Or perhaps any nearby viewers with the patience and far-fetched ability to make sense of the iterations of stone arrangements. The internal/external distinction between the stones themselves and the stone movers/viewers seems critical to me.

Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.


You need of course something moving the stones and interpreting the arrangement, but there is no need to bother Tom Hanks with that. Just throw in a Roomba pushing the stones around according to the state transition function. Make it a real quick one so that something meaningful can happen before the sun dies. And for good measure, throw in a simple humanoid robot with sensors and actuators, from which the Roomba receives inputs to the computation and to which it sends control signals decoded from the arrangement of the stones.

Now that is not just a pile of stones, but none of the added things adds much complexity. A robot pushing stones according to predetermined rules can be very simple. Even simpler than a Roomba would be a gantry crane above the stones; it could essentially be just a few motors, a claw, and a switch to detect the presence or absence of a stone. I also just realized that the state transition function would not be an unimaginable monster with the possibility to hide something in there. You do not need much code to simulate a neural network regardless of its size, and it would probably not grow that much when encoded for a Turing machine.
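The crane-and-stones setup really is just a Turing machine with a one-stone alphabet. A purely illustrative toy sketch (this machine increments a binary number rather than simulating neurons, but the mechanics - read a cell, place or remove a stone, move, change state - are the whole story):

```python
# "Tape" = set of integer positions holding a stone (symbol 1);
# every other position is empty (symbol 0). The crane is the head:
# it places or removes at most one stone per step.

def run(stones, head, rules, state, halt="HALT", max_steps=10_000):
    """Run the machine until it halts; return the final stone layout."""
    for _ in range(max_steps):
        if state == halt:
            break
        sym = 1 if head in stones else 0
        write, move, state = rules[(state, sym)]
        (stones.add if write else stones.discard)(head)
        head += move
    return stones

# Increment rules: from the rightmost bit, turn 1s into 0s while
# carrying leftward, then drop a stone on the first 0 (or blank) cell.
rules = {
    ("carry", 1): (0, -1, "carry"),
    ("carry", 0): (1, 0, "HALT"),
}

# Stones at positions 0, 2, 3 encode 1011 (= 11), MSB at cell 0.
result = run({0, 2, 3}, 3, rules, "carry")
print(sorted(result))  # [0, 1] -> the tape now reads 1100 (= 12)
```

The transition table for a neuron-level brain simulation would be vastly larger, but not different in kind, which is exactly what makes the thought experiment uncomfortable.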

Now which part of the stones and the crane feels pain and anger if a loved one dies? And we are not looking for some stones signaling certain muscle activity or the production of tears; we are looking for the internal experience of pain. Based on my beliefs I seem to be forced to accept that those stones can somehow be conscious and feel emotions, even if it seems hopeless to understand how this works. But this also has a possibly even more disturbing consequence. If piles of stones can be conscious, what prevents other objects from being so? What about stars in galaxies? What does it feel like to be a galaxy?

> Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.

I cannot, no matter how hard I try. I can imagine software faking human experiences, saying it feels joy or pain; I cannot imagine it actually feeling it. Not least because I cannot even really say what the difference is. It seems to me that once I could imagine this for software, it would only be a small step to imagine the same for a pile of stones. The difference between a human and some software seems enormously larger than the difference between some software and a pile of stones, at least to me.


If you compress all these trillions of years back into one second, you'll be able to talk to the simulated person and relate to its feelings; the stones will become a blur in the background and stop obscuring the view.

How can such systems have feelings? I don't know. Probably the same way a network of chemically/electrically activated neurons does.


Dennett claims that he himself is unconscious. Not someone I'd look to for advice on anything, really.


Really? That would be very surprising to me. Where did Dennett say this?


He claims that consciousness is an illusion, ergo: He is unconscious.


Great paper that perfectly exemplifies what a "modern" philosopher does and thinks about on a daily basis. Unfortunately, it's also a paper that is often read only in lower-level philosophy courses. In my opinion, one needs to have a very deep understanding of the Philosophy of Mind corpus before being thrown into this paper because an uninitiated reader may be tempted to think that Nagel is obviously right on X or obviously wrong on Y, but his points are very nuanced.

Nagel is also making some pretty big claims, specifically about the "private" nature of experience. Anscombe (whom I also loved reading) makes similar arguments. Philosophy of Mind was never my forte (I'm a logic and ethics guy), but reading Nagel was always a breath of fresh air in what I think is a subfield marred by unnecessary technicality and equivocation.


I just found out yesterday how amazing the immune system of bats is: they are practically immortal, and almost nothing makes them sick!

But then it's confusing that bats in my region are being killed off by white nose syndrome. Fungus on their nose wakes them from hibernation and causes them such stress it eventually kills them.

Which actually does make sense since the theory is bats do so well immunity-wise because of the large swing in their body temperature range. A bat's temperature drops very low when hibernating then when they fly it shoots up very high.

http://www.popsci.com/bats-immune-systems-are-totally-unique


It really comes down to objectivity vs subjectivity. What properties are objective features of the world, if any, and which ones are either individual or creature specific?

And if science is our best attempt at creating an objective account of the world, how do we include the subjectivity of creatures different than us in that account?

If we can't, then science is incomplete, because the world isn't entirely objective. Also, if the objective (shape, extension, number, etc) is created by abstracting away from the subjective (color, smell, feel, etc), then you can't use the objective to explain the subjective, although you can correlate the two (certain brain processes result in certain subjective experiences).


These days we have learned how to do seemingly "subjective" feats with objective means (neural nets). It's not such a big, insurmountable divide any more.


You mean the neural nets reported being puzzled by consciousness? Otherwise, I think you're conflating subjectivity with cognitive abilities, which need not be subjective at all.


An interesting development for anyone who wants to take Nagel seriously is Mary and the black-and-white room (Frank Jackson's "knowledge argument"). Short version: a girl grows up in a black-and-white room (which is impossible in a lot of ways, but just imagine), and she is fed a database that lets her learn everything there is to know about the physical facts of color (also impossible, but follow along). Then one day she exits the room and sees a red rose. Jackson, making an argument congenial to Nagel's, held that there's something about Mary's experience of color in that moment that she did not learn in the room.

Dennett is another strong opponent here. He says Mary would retort: aha, but you have a limited imagination. I know everything about color, so nothing about that experience was new to me in any non-trivial way whatsoever. Because Mary knows EVERYTHING about color. It may be impossible for us to imagine someone being omniscient about color, and that spoils the whole argument.

Similarly, what if we knew every physical fact about bats. Dennett only needs to say, we have small imaginations. We have trouble imagining that what it is like to be a bat would be the sum of all physical facts about a bat. But we can't be sure of that. There is no logical certainty. Hence Nagel proved nothing.

Yes, you end up saying "we're all robots/zombies/unconscious animals" this way, and that there's actually nothing special about being human; it's just a myth we tell ourselves. But aside from some people not finding that conclusion tasteful, there's nothing wrong with it.



Nagel acknowledges this in the article, for what it's worth:

Footnote 8, page 9:

> 8 It may be easier than I suppose to transcend inter-species barriers with the aid of the imagination. For example, blind people are able to detect objects near them by a form of sonar, using vocal clicks or taps of a cane. Perhaps if one knew what that was like, one could by extension imagine roughly what it was like to possess the much more refined sonar of a bat. The distance between oneself and other persons and other species can fall anywhere on a continuum. Even for other persons the understanding of what it is like to be them is only partial, and when one moves to species very different from oneself, a lesser degree of partial understanding may still be available. The imagination is remarkably flexible. My point, however, is not that we cannot know what it is like to be a bat. I am not raising that epistemological problem. My point is rather that even to form a conception of what it is like to be a bat (and a fortiori to know what it is like to be a bat) one must take up the bat's point of view. If one can take it up roughly, or partially, then one's conception will also be rough or partial. Or so it seems in our present state of understanding.


To be a bat, or a woman, or man, means to be an agent that exists as part of an environment, learning to select the best actions that maximize rewards.

Of course for a bat the list of possible sensations, actions and rewards is different than for men or women. But what doesn't change is that they are all agents acting by reinforcement learning, trying to survive.

I'm wondering why philosophy doesn't take this stance. Is it because it sounds too similar to behaviorism? Does it seem reductionist? Instead of using the parsimonious concept of reinforcement learning agent, they use hard to define words such as consciousness and self. Instead of looking at what matters - reward maximization, survival - they analyze qualia and Chinese Rooms.

Philosophers, why are you ignoring recent AI research? Isn't it a waste of time to use such intuitive concepts as consciousness, free will and self? If only you could have come to a definition of consciousness you agree upon, but you can't, because it's a reification, a suitcase concept.
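As a concrete (if toy) version of the framing above, here is a minimal reinforcement-learning agent: it is embedded in an environment, takes actions, and learns from reward alone. The environment (a 2-armed bandit), the payout probabilities, and the hyperparameters are all made-up numbers for illustration:

```python
import random

random.seed(0)
PAYOUT = {0: 0.3, 1: 0.8}  # assumed: arm -> probability of reward

def pull(arm):
    """The environment: a stochastic reward signal, nothing more."""
    return 1.0 if random.random() < PAYOUT[arm] else 0.0

# The agent: incremental value estimates + epsilon-greedy exploration.
values, counts = {0: 0.0, 1: 0.0}, {0: 0, 1: 0}
for step in range(2000):
    arm = (random.choice([0, 1]) if random.random() < 0.1  # explore
           else max(values, key=values.get))               # exploit
    reward = pull(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # running mean

print(max(values, key=values.get))  # the agent settles on arm 1
```

Of course, the philosophers' rejoinder writes itself: this loop maximizes reward just fine, and nothing in it is obviously "what it is like" to be the agent, which is exactly the gap Nagel is pointing at.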


Lots of cognitive science literature models human beings as reinforcement learners, with plenty of good evidence for the hypothesis. The interesting question is: what kind of reinforcement learning?


Whoa, time warp to a previous life...

Whatever you decide about his beliefs, Nagel is a wonderful writer. Pick up a copy of Mortal Questions if you get a chance.


It's interesting that we're getting fairly close (a decade or three?) to being able to simulate brains, and then we could combine human and bat simulations to be able to check out what it's like.


Thomas Nagel teaches at NYU, right?


Yup.


No need to read the paper. Just ask Batman.

Always be yourself, unless you can be Batman. Then always be Batman!

... just kidding. Interesting read anyways.


somebody show this to Nolan, about time we got another Dark Knight movie.



