
What Is It Like to Be a Bat? (1974) [pdf] - miobrien
http://www2.warwick.ac.uk/fac/cross_fac/iatl/activities/modules/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf
======
pixelmonkey
Not sure why this ended up on HN today, but this is a classic text read in
many introductory courses focused on philosophy of mind and consciousness.

I quite like the paper. I once wrote a follow-up paper about how one might
interpret it now that we know about neuroplasticity. That was fun to ponder.

I looked at a specific experiment where researchers managed to map visual
impulses from a camera to an electrode "pixel grid" physically placed on the
tongue, so that visual inputs corresponded to locations on the tongue. The
research showed that people could catch balls thrown at them while
blindfolded using this setup, after a relatively short period of sensory
adjustment. They would actually "see with their tongue".

You can read about that research in this 2003 article from Discover Magazine:

[http://discovermagazine.com/2003/jun/feattongue](http://discovermagazine.com/2003/jun/feattongue)

The Nagel question, then: is the fact that I experience vision a certain way
really an essential aspect of "what it's like to be me"? Or is my
consciousness a "further fact"... something running in the background of all
my senses, no matter how they are wired (or rewired) over time? This is
sometimes referred to as the "hard problem" of consciousness, summarized
nicely in Wikipedia here:

[https://en.wikipedia.org/wiki/Hard_problem_of_consciousness](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness)

~~~
SamBam
I think that's an interesting offshoot of the discussion, but I'm not sure
that it really affects the argument that much.

Let's say you undergo a process to slowly use your brain's plasticity to get
closer and closer to perceiving as a bat. Well, does that help _me_? Am I able
to imagine what it is like to be a bat? Of course not. Well, what if you could
talk to me and describe it, then would I have a true understanding of the
experience of being a bat? Still no!

Nagel's point isn't about whether we could do anything to make ourselves into
bats, it's rather that, when we _don't_ have that qualitative experience, we
cannot truly understand what it is like to have that experience.

If anything, the argument works even better now that you've gone and given
yourself sonar: you can talk to me, unlike that dumb bat, so you can describe
your experiences to me, _and yet_ I will still have no true understanding of
the experience itself.

That is (a part of) the Hard Problem of consciousness: that my studying and
examining and learning about your conscious experience still does not allow me
to understand it.

~~~
danbruc
_If anything, the argument works even better now that you've gone and given
yourself sonar: you can talk to me, unlike that dumb bat, so you can describe
your experiences to me, and yet I will still have no true understanding of the
experience itself._

This underlines the suggestion at the very end: we lack the tools to
properly talk about experiences, and maybe we could actually develop them. My
inability to describe to you, or probably even better to a blind person, what
it is like to see a red object may after all not be fundamental. Maybe there
is a way to communicate and in turn truly understand what using the tongue
device is like, we just have not yet discovered it.

~~~
red75prime
What would it be like? A series of sounds that reprograms brain structure to
develop image-processing networks?

I can't say it sounds plausible.

~~~
danbruc
Think about explaining general relativity, quantum chromodynamics, or just the
way a second level cache works to a random human from the early stone age.
Probably not an easy task. I am not sure if this is actually a fair analogy,
describing what seeing red is like is in some sense very different from
explaining the theory of general relativity. But on the other hand the
situations are also somewhat similar: the stone age guy just does not possess
the required language, probably to a large extent the math, and knowledge, and is
unable to look back on several thousand years of human history and innovation
that shaped us. Sure, it looks like a stretch to us that we will ever be able
to meaningfully convey the experience of seeing a red object, but I am not
convinced that this is a given. Intuitively I certainly agree with you, not
going to happen, but maybe that is just a lack of imagination.

------
woodruffw
I have a suspicion that future generations of philosophers will be taught
Nagel's _is like-be_ problem in the fashion that Hume's _is-ought_ problem [1]
is now taught. Although they concern completely different subjects, the
similarities between the two are striking:

1. On what grounds can we make statements about what _ought_ to be based on
what _is_?

2. On what grounds can we make statements about what it is to _be_ something
based on what it _is like_ to think about the experiences of that thing?

Edit: In case there's any confusion, I called it the _is like-be_ problem to
draw a similarity to Hume, not because anybody else calls it that.

[1]:
[https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem)

~~~
andkon
Yeah, I think this is probably true. Hume was trying to argue that normative
statements can't be reduced to or derived from natural statements about how
the world is; Nagel's fighting the reduction of consciousness to positive,
objective statements about consciousness, because of the existence of
phenomenal character. There's a certain way to be a bat that's wrapped up in
the very fact of its being a conscious being with a specific biology and
neurology, but which is still irreducible to those features.

If you wanna link this up to an older philosopher, this is basically what
Kant's on about in his Critique of Pure Reason. Our special human brain stuff
is responsible for providing us with the faculties (e.g. understanding of
things like causation) we have for understanding the world. We are always
trapped in our experience of the world, and though that shouldn't preclude us
from experiencing the world, you can't separate how we experience objective
reality from the fact that we experience it as humans, with human brains and
faculties.

~~~
woodruffw
Absolutely, and you've said it better than I did.

As a nit, wasn't Kant's Critique of Pure Reason written in part as a response
to Hume?

~~~
andkon
ooooh totally was. He was deeply affected by Hume's attack on causation, so
tried to come up with a coherent response. Ended up positing a whole bunch of
wacky and (I think) true things about our embedded cognition, so I think it's
probably fair to say that Critique of Pure Reason is the result of his beef
with Hume.

~~~
miobrien
In the CPR, Kant proposes a synthesis of both rationalism and empiricism:
although some knowledge comes from sensory experience, there are some things
that we just know -- e.g., pure mathematics. So I'd say it's the result of his
beef with everyone!

~~~
goatlover
Doesn't this all go back to Plato, where Plato thought knowledge depended on
the forms, because the flux of particulars cannot provide knowledge?

------
GeorgeKangas
The following two sentences, from the second page of the paper, are where he
really nails it:

"We may call this the subjective character of experience. It is not captured
by any of the familiar, recently devised reductive analyses of the mental, for
all of them are logically compatible with its absence."

All of today's science is logically compatible with the absence of subjective
experience, because science cannot even define what subjective experience is.

Every "definition" of subjective experience (SE, in what follows), is actually
an ostension: the reader's attention is directed to his own SE. For example:

    
    
      - Perhaps nothing actually exists -- except SE.  That's the one thing that MUST exist. [Descartes]
    
      - "There is such a thing, as what it's like to be..." [Nagel, in the posted paper]
    
      - There's such a thing, as what seeing red is like: a quale.
    
      - Science can't rule out that a behaviorally perfect duplicate of you, could lack SE (i.e. be a zombie).
        Science can't rule it out, but maybe SE does: why would a zombie vociferously assert the existence of SE?
        Those who claim SE is an illusion, i.e. that you actually are a zombie but don't know it,
        have the burden of explaining why zombies claim to have SE.
    

All of these "definitions" require an audience that has SE and is able to
recognize it. A scientific definition would not.

~~~
eli_gottlieb
>All of today's science is logically compatible with the absence of subjective
experience, because science cannot even define what subjective experience is.

That just means science is incomplete and we are ignorant, not that subjective
experience is metaphysically different from all other observed phenomena.

~~~
cassowary
It's compatible with both propositions. At the moment we literally have no
good grounds for making either conclusion, although we have plenty of weak
grounds in both directions.

------
stabbles
What I like about Thomas Nagel is that he is one of the few atheists who steps
up against darwinism and materialism as an explanation for almost everything.

Quoting from The Last Word (1997):

"[...] I don't want there to be a God; I don't want the universe to be like
that. My guess is that this cosmic authority problem is not a rare condition
and that it is responsible for much of the scientism and reductionism of our
time. One of the tendencies it supports is the ludicrous overuse of
evolutionary biology to explain everything about life, including everything
about the human mind."

Of course, his book Mind & Cosmos is the best example of this.

~~~
amasad
I agree that darwinism is overused to explain everything but materialism seems
inescapable -- especially for atheists. I think even Nagel agrees with
"Inference to the Best Explanation", and so far I've never seen a single case
where anything was explained better than by the materialist explanation.

~~~
goatlover
Materialism is not inescapable for atheists in philosophy at all. You can be
anti-realist or an idealist sans God. You can also be a platonist. Or you
could be a dualist like Chalmers. There's plenty of room for different views
than materialism without committing to a belief in the supernatural.

~~~
amasad
I guess I meant physicalism then. Or maybe naturalism? I didn't mean to make a
claim on the philosophy of mind, just that the world is simply the physical
world.

On the other hand dualism refitted for the physicalist world seems like a hack
at best. Panpsychism fails the inference to the best explanation or Occam's
razor etc.

~~~
goatlover
It depends on whether physicalism can account for all aspects of mind along
with abstract concepts and the laws of nature (or causation). To be a
successful monism, one has to be able to explain everything in terms of that
monism.

~~~
amasad
Yes, and many have done so. Marvin Minsky, Steven Pinker, and Daniel Dennett
come to mind. In philosophy lingo I think it all ends up being functional
physicalism.

~~~
goatlover
There's no consensus that Minsky, Pinker, Dennett have succeeded. Plenty of
philosophers and scientists have disagreed with their arguments. Dennett's
attempt rests on the controversial claim that we're actually p-zombies, and
consciousness can be understood in only functional terms. This is prima facie
absurd to many people.

~~~
amasad
Dennett's argument is simply an inversion of the causal chain: i.e. mind is not
the causer but the cause. "P-zombies" is kind of a non-starter when you think
about it. I think LessWrong's essays on it show that; here is one passage:

"you can form a propositional belief that "Consciousness is without effect",
and not see any contradiction at first, if you don't realize that talking
about consciousness is an effect of being conscious. But once you see the
connection from the general rule that consciousness has no effect, to the
specific implication that consciousness has no effect on how philosophers
write papers about consciousness, zombie-ism stops being intuitive and starts
requiring you to postulate strange things."

[http://lesswrong.com/lw/p7/zombies_zombies/](http://lesswrong.com/lw/p7/zombies_zombies/)

------
KingMob
I went to grad school to study the neuroscientific basis of consciousness, and
this paper is a classic. It inspired one of the weirder conference sessions
I've ever been to at ASSC, called "Is a fish conscious?"

Imagine a room full of philosophers and scientists arguing vociferously to
determine where (or even if) to draw the line between conscious/unconscious
organisms. (E.g., fish: probably; bacteria: probably not?)

~~~
CuriouslyC
A large part of the problem is that people don't all agree on the definition
of consciousness. It ranges anywhere from the property of having an internal
experience (which I subscribe to) all the way to self awareness, and even to
the ability for rational thought.

This - and arguments over the definition of "is" - were the reason I stopped
reading philosophy of mind papers.

------
ForrestN
TLDR: please read every word, because this work is that brilliant and that
important.

------
sahadeva
This is a great essay, and I would highly recommend this as a counterpoint to
Nagel in terms of explaining consciousness ("qualia") if you want to get the
most out of Nagel's argument:
[http://www.newyorker.com/magazine/2017/03/27/daniel-
dennetts...](http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts-
science-of-the-soul)

If Nagel thinks materialists can't explain consciousness, Dennett thinks they
can. E.g.:

"The obvious answer to the question of whether animals have selves is that
they sort of have them. [Dennett] loves the phrase 'sort of.' Picture the
brain, he often says, as a collection of subsystems that 'sort of' know,
think, decide, and feel. These layers build up, incrementally, to the real
thing. Animals have fewer mental layers than people—in particular, they lack
language, which Dennett believes endows human mental life with its complexity
and texture—but this doesn’t make them zombies. It just means that they 'sort
of' have consciousness, as measured by human standards." Joshua Rothman, New
Yorker, March 27, 2017 - [http://www.newyorker.com/magazine/2017/03/27/daniel-
dennetts...](http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts-
science-of-the-soul)

More detailed counterargument by Dennett: [https://www.amazon.com/DARWINS-
DANGEROUS-IDEA-EVOLUTION-MEAN...](https://www.amazon.com/DARWINS-DANGEROUS-
IDEA-EVOLUTION-MEANINGS/dp/068482471X)

~~~
danbruc
_If Nagel thinks materialists can't explain consciousness [...]_

I do not think he makes this statement. He seems perfectly open to the
possibility of explaining subjective experiences in physical terms but he is
convinced that we are at least very far away from being able to do it. In
consequence he obviously considers all current attempts flawed and lacking.

~~~
sahadeva
I appreciate the pushback. Here is what Nagel says in his essay (emphasis
mine). I wonder if you still disagree?

"It is _impossible_ to exclude the phenomenological features of experience
from a reduction in the same way that one excludes the phenomenal features of
an ordinary substance from a physical or chemical reduction of it--namely, by
explaining them as effects on the minds of human observers. If physicalism is
to be defended, the phenomenological features must themselves be given a
physical account. But when we examine their subjective character it seems that
such a result is _impossible_. The reason is that every subjective phenomenon
is essentially connected with a single point of view, and it seems inevitable
that an objective, physical theory will abandon that point of view."

~~~
danbruc
Let me counter with this quote.

 _If we acknowledge that a physical theory of mind must account for the
subjective character of experience, we must admit that no presently available
conception gives us a clue how this could be done. The problem is unique. If
mental processes are indeed physical processes, then there is something it is
like, intrinsically, to undergo certain physical processes. What it is for
such a thing to be the case remains a mystery._

I think the critical point in your quote is the last sentence.

 _The reason is that every subjective phenomenon is essentially connected with
a single point of view, and it seems inevitable that an objective, physical
theory will abandon that point of view._

Sure, my experience of looking at a red object is fundamentally my experience,
but there seems to be no obvious reason why we could not abstract me away and
talk about the experience of an arbitrary human seeing a red object. This is
also in line with the suggestions at the very end, trying to develop the tools
to talk about experiences in an objective manner.

~~~
sahadeva
I don't think I fully understand what Nagel means by developing objective
tools to describe experiences, I should re-read the full article a few more
times. I think, though, that you're exactly right: We _do_ have a shared
understanding and experience of what it's like to see a red ball, and we can
talk about it in the abstract. My reading of that is that because we do have a
shared human experience there can and indeed must be a physical account
(Nagel's term) of it. Nagel's argument brings in a wedge of doubt about
whether that shared experience itself is real by saying, at least partially,
that because we can't explain what it is like to be a bat (the bat's point of
view) in human terms, we won't be able to explain what consciousness is for
humans _or_ bats with a physical account (at all, or at least right now).
Dennett, I think, says that that is more or less backward, and that all that
means is that there are different kinds of consciousness. In the article I
mention he says,

 _“The big mistake we’re making,” [Dennett] said, “is taking our congenial,
shared understanding of what it’s like to be us, which we learn from novels
and plays and talking to each other, and then applying it back down the animal
kingdom. Wittgenstein”—he deepened his voice—“famously wrote, ‘If a lion could
talk, we couldn’t understand him.’ But no! If a lion could talk, we’d
understand him just fine. He just wouldn’t help us understand anything about
lions.”

“Because he wouldn’t be a lion,” another researcher said.

“Right,” Dennett replied. “He would be so different from regular lions that he
wouldn’t tell us what it’s like to be a lion. I think we should just get used
to the fact that the human concepts we apply so comfortably in our everyday
lives apply only sort of to animals.” He concluded, “The notorious zombie
problem is just a philosopher’s fantasy. It’s not anything that we have to
take seriously.”_

I found that convincing; I'm curious if you do as well.

~~~
danbruc
I agree insofar as being a lion or a bat is probably really different
from being a human. It is a pity that I cannot remember what it felt like
when I was only one or two years old, that might provide a glimpse at the
difference. In all the time I can remember I was not fundamentally
different from now, or at least that is how I remember it. I would really like
to know what it was like at a very young age, not speaking or understanding a
language, not recognizing myself in a mirror, maybe not even being aware of my
existence.

Anyhow. I did not read the entire Dennett article when it was posted here a
few days ago, maybe I should, but it was just not compelling to me [1], at
least as far as I got. What I got from the part I read is that he seems to do
exactly what Nagel warns of, dismissing the experience of being a human. I
find the comparison with a computer much more interesting than the comparison
with animals. What if we build an artificial neural network resembling a human
brain? If that is not good enough, what if we perform a molecular simulation
of a brain? Or even a quantum physical simulation of a brain if molecules are
still not good enough, but personally I doubt that.

But what if? Does this artificial brain experience what it is like to be a
human? As a physicalist I think the answer is yes. But just as Nagel says, I
have no idea how this could possibly work, how the transistors in my computer
could go from controlling the flow of electrons by mindlessly following
physical laws to being aware of their existence in a universe, seeing red,
feeling joy and pain. What if I replaced the computer with a mechanical one
made out of billions and billions of cogwheels? With stones on a beach
simulating a Turing machine? With a gigantic printed look-up table mapping all
possible inputs to their outputs?

I cannot think of any good reason why the stones on the beach - together with
someone or something moving them around to perform the computation - should be
any less conscious than the human brain they are simulating. And this seems of
course absurd. Thinking about this is what gets me the closest to becoming a
dualist or something like that. There seems to be not even the tiniest bit of
hope on the horizon of being able to attack this problem from a physicalist
perspective. So when Dennett says that there is no problem, assuming he
actually says this, I must disagree.

[1] I had prior exposure to Dennett and, as far as I remember, quite liked
what he had to say but somehow not this time. Maybe the topic was a different
one, maybe it is just the way the article is written, maybe I should just read
the entire thing.

P.S. I just did some more reading on Nagel, it seems you are at least more
correct than me. He seems not as open to a physicalist account of
consciousness as I thought but the details are hard to tell without actually
reading more of his works.

~~~
red75prime
> I can not think of any good reason why the stones on the beach - together
> with someone or something moving them around to perform the computation -
> should be any less conscious than the human brain they are simulating. And
> this seems of course absurd.

It will take around 3,000,000,000,000 years and an enormous beach to simulate
one second of brain activity. It is literally unimaginable. What do you imagine
when you talk about absurdity? Is it some small scale model, which is laid
bare before your mind's eye in all its simplicity, leaving no place for
consciousness to hide?

~~~
danbruc
I certainly see a beach with a few hundred or so stones when I close my eyes,
but I don't think that matters. It is simply the idea that stones on a beach -
a lot of stones on a very long beach in a very specific arrangement,
relentlessly reordered by Tom Hanks for billions of billions of years
according to a very long list of rules overseen by Wilson - could really feel
joy and pain that seems absurd. I know it is a common argument that it is just
the sheer scale that would be required, and our inability to imagine it, that
leads us astray, but in the end it just seems wrong that stones on a beach can
feel pain, at least to me. But if you think about a computer simulating a
brain at the level of neurons, something that is somewhat in reach, does this
make it any easier? Does it sound so much less absurd that a data center
packed with GPUs could really feel pain?

~~~
dwaltrip
Stones don't communicate among themselves, and only change state under the
entirely external effort of the entity that moves them. I would argue that the
stones would need some mechanism for modifying their own state to even have a
slim chance at consciousness. Seeing as stones are inanimate objects that
can't possibly operate any mechanism, I think the idea is dead in the water.

Another way of looking at it: the significance of any particular arrangement
(or sequence of arrangements) of the stones is only meaningful in the mind of
the entity that is moving them around. Or perhaps any nearby viewers with the
patience and far-fetched ability to make sense of the iterations of stone
arrangements. The internal/external distinction between the stones themselves
and the stone movers/viewers seems critical to me.

Software on the other hand... that is a bit harder to categorically dismiss. I
think I can imagine software that produces an experience somewhat analogous to
the human one.

~~~
danbruc
You need of course something moving the stones and interpreting the
arrangement but there is no need to bother Tom Hanks with that. Just throw in
a Roomba pushing the stones around according to the state transition function.
Make it a real quick one so that something meaningful can happen before the
sun dies. And for good measure throw in a simple humanoid robot with sensors
and actuators from which the Roomba receives inputs to the computation and to
which it sends control signals decoded from the arrangement of the stones.

Now that is not just a pile of stones, but none of the added things adds
much complexity. A robot pushing stones according to predetermined
rules can be very simple. Even simpler than a Roomba would be a gantry crane
above the stones; it could essentially be just a few motors, a claw, and a
switch to detect the presence or absence of a stone. I also just realized that
the state transition function would not be an unimaginable monster with the
possibility to hide something in there. You do not need much code to simulate
a neural network regardless of its size and it would probably not grow that
much when encoded for a Turing machine.
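To illustrate how little machinery that would take, here is a toy sketch (entirely my own, not from Nagel or Dennett): a complete Turing machine whose state transition function increments a binary number. Every name in it is made up for the example.

```python
# A toy sketch (purely illustrative) of how small a complete state
# transition function can be: a Turing machine that increments a binary
# number. The whole "program" the crane would follow is the six-entry
# RULES table; only the tape -- the beach itself -- needs to be enormous.

# (state, read_symbol) -> (write_symbol, head_move, next_state)
RULES = {
    ("right", "0"): ("0", +1, "right"),  # scan right over the digits
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),  # hit the blank, turn around, carry
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),   # 0 + carry = 1, finished
    ("carry", "_"): ("1", -1, "done"),   # carried past the last digit
}

def increment(bits):
    tape = dict(enumerate(bits))  # the beach: position -> stone ("0"/"1")
    pos, state = 0, "right"
    while state != "done":
        symbol = tape.get(pos, "_")        # a missing stone reads as blank
        write, move, state = RULES[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

print(increment("1011"))  # binary 11 -> 12, prints "1100"
```

Simulating a neural network instead of an increment would need more rules, but the table stays finite and surveyable; it is the tape that explodes.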

Now which part of the stones and the crane feels pain and anger if a loved one
dies? And we are not looking for some stones signaling certain muscle activity
or the production of tears; we are looking for the internal experience of
pain. Based on my beliefs I seem to be forced to accept that those stones can
somehow be conscious and feel emotions even if it seems hopeless to understand
how this works. But this also has a possibly even more disturbing consequence.
If piles of stones can be conscious, what prevents other objects from that?
What about stars in galaxies? What does it feel like to be a galaxy?

 _Software on the other hand... that is a bit harder to categorically dismiss.
I think I can imagine software that produces an experience somewhat analogous
to the human one._

I cannot, no matter how hard I try. I can imagine software faking human
experiences, saying it feels joy or pain; I cannot imagine it actually
feeling it. Not least because I cannot even really say what the difference is.
It seems to me that once I could imagine this for a software it would only be
a small step to imagine the same for a pile of stones. The difference between
a human and some software seems enormously larger than the difference between
some software and a pile of stones, at least to me.

------
dvt
Great paper that perfectly exemplifies what a "modern" philosopher does and
thinks about on a daily basis. Unfortunately, it's also a paper that is often
read only in lower-level philosophy courses. In my opinion, one needs to have
a very deep understanding of the Philosophy of Mind corpus before being thrown
into this paper because an uninitiated reader may be tempted to think that
Nagel is obviously right on X or obviously wrong on Y, but his points are very
nuanced.

Nagel is also making some pretty big claims, specifically about the "private"
nature of experience. Anscombe (whom I also loved reading) makes similar
arguments. Philosophy of Mind was never my forte (I'm a logic and ethics guy),
but reading Nagel was always a breath of fresh air in what I think is a
subfield marred by unnecessary technicality and equivocation.

------
dghughes
I just found out yesterday how amazing the immune system of bats is: they
are practically immortal, almost nothing makes them sick!

But then it's confusing that bats in my region are being killed off by white
nose syndrome. Fungus on their nose wakes them from hibernation and causes
them such stress it eventually kills them.

Which actually does make sense, since the theory is that bats do so well
immunity-wise because of the large swing in their body temperature range. A bat's
temperature drops very low when hibernating then when they fly it shoots up
very high.

[http://www.popsci.com/bats-immune-systems-are-totally-
unique](http://www.popsci.com/bats-immune-systems-are-totally-unique)

------
goatlover
It really comes down to objectivity vs subjectivity. What properties are
objective features of the world, if any, and which ones are either individual
or creature specific?

And if science is our best attempt at creating an objective account of the
world, how do we include the subjectivity of creatures different than us in
that account?

If we can't, then science is incomplete, because the world isn't entirely
objective. Also, if the objective (shape, extension, number, etc) is created
by abstracting away from the subjective (color, smell, feel, etc), then you
can't use the objective to explain the subjective, although you can correlate
the two (certain brain processes result in certain subjective experiences).

~~~
visarga
These days we have learned how to do seemingly "subjective" feats with
objective means (neural nets). It's not such a big, insurmountable divide any more.

~~~
goatlover
You mean the neural nets reported being puzzled by consciousness? Otherwise, I
think you're conflating subjectivity with cognitive abilities, which need not
be subjective at all.

------
anonemouse145
An interesting development for anyone who wants to take Nagel seriously is
Mary's Room (Frank Jackson's knowledge argument). Short version: a girl grows
up in a black and white room (which is impossible in a lot of ways, but just
imagine), and she is fed a database that lets her learn everything there is to
know about the physical world of color (which is impossible, but follow
along), and then one day she exits the room and sees a red rose. Jackson
argued there's something about Mary's experience of color in that moment that
she did not learn in the room.

Dennett is another strong opponent here. He says Mary would retort, aha but
you have a limited imagination. I know everything about color, so nothing
about that experience was new to me in any non trivial way whatsoever. Because
Mary knows EVERYTHING about color. It may be impossible for us to imagine
someone being omniscient about color, and that spoils the whole argument.

Similarly, what if we knew every physical fact about bats? Dennett only needs
to say, we have small imaginations. We have trouble imagining that what it is
like to be a bat would be the sum of all physical facts about a bat. But we
can't be sure of that. There is no logical certainty. Hence Nagel proved
nothing.

Yes, you end up saying "we're all robots/zombies/unconscious animals" this way,
and that there's actually nothing special about being human, it's just a myth
we tell ourselves. But aside from some people not finding that conclusion
tasteful, there's nothing wrong with it.

------
RockyMcNuts
Humans can actually get pretty good at echolocation:

[http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Echo.pdf](http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/Echo.pdf)

[https://www.ted.com/talks/daniel_kish_how_i_use_sonar_to_nav...](https://www.ted.com/talks/daniel_kish_how_i_use_sonar_to_navigate_the_world)

~~~
larsiusprime
Nagel acknowledges this in the article, for what it's worth:

Footnote 8, page 9:

> 8 It may be easier than I suppose to transcend inter-species barriers with
> the aid of the imagination. For example, blind people are able to detect
> objects near them by a form of sonar, using vocal clicks or taps of a cane.
> Perhaps if one knew what that was like, one could by extension imagine
> roughly what it was like to possess the much more refined sonar of a bat.
> The distance between oneself and other persons and other species can fall
> anywhere on a continuum. Even for other persons the understanding of what it
> is like to be them is only partial, and when one moves to species very
> different from oneself, a lesser degree of partial understanding may still
> be available. The imagination is remarkably flexible. My point, however, is
> not that we cannot know what it is like to be a bat. I am not raising that
> epistemological problem. My point is rather that even to form a conception
> of what it is like to be a bat (and a fortiori to know what it is like to be
> a bat) one must take up the bat's point of view. If one can take it up
> roughly, or partially, then one's conception will also be rough or partial.
> Or so it seems in our present state of understanding.

------
visarga
To be a bat, or a woman, or man, means to be an agent that exists as part of
an environment, learning to select the best actions that maximize rewards.

Of course for a bat the list of possible sensations, actions and rewards is
different than for men or women. But what doesn't change is that they are all
agents acting by reinforcement learning, trying to survive.
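For what it's worth, that framing can be made concrete in a few lines. This is a toy sketch of my own (the two-state world, its rewards, and all parameters are invented): a tabular, bandit-style learner that settles on whichever action pays off in each state.

```python
import random

# Purely illustrative sketch of the "reward-maximizing agent" framing.
# A bat would have different states, actions, and rewards, but the
# learning loop itself would be unchanged.

random.seed(0)

STATES, ACTIONS = (0, 1), (0, 1)
# Hypothetical rewards: action 1 pays in state 0, action 0 pays in state 1.
REWARD = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # learned action values
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

state = 0
for _ in range(2000):
    # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = REWARD[(state, action)]
    q[(state, action)] += alpha * (reward - q[(state, action)])
    state = random.choice(STATES)  # the world moves on

# The agent ends up preferring the rewarding action in each state.
print(q[(0, 1)] > q[(0, 0)], q[(1, 0)] > q[(1, 1)])
```

Everything creature-specific lives in the state, action, and reward tables; the loop is agent-agnostic, which is exactly the parsimony being claimed.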

I'm wondering why philosophy doesn't take this stance. Is it because it sounds
too similar to behaviorism? Does it seem reductionist? Instead of using the
parsimonious concept of reinforcement learning agent, they use hard to define
words such as consciousness and self. Instead of looking at what matters -
reward maximization, survival - they analyze qualia and Chinese Rooms.

Philosophers, why are you ignoring recent AI research? Isn't it a waste of
time to use such intuitive concepts as consciousness, free will and self? If
only you could have come to a definition of consciousness you agree upon, but
you can't, because it's a reification, a suitcase concept.

~~~
eli_gottlieb
Lots of cognitive science literature models human beings as reinforcement
learners, with plenty of good evidence for the hypothesis. The interesting
question is: what _kind_ of reinforcement learning?

------
username223
Whoa, time warp to a previous life...

Whatever you decide about his beliefs, Nagel is a wonderful writer. Pick up a
copy of _Mortal Questions_ if you get a chance.

------
tim333
It's interesting that we're getting fairly close (a decade or three?) to being
able to simulate brains, and then we could combine human and bat simulations
to check out what it's like.

------
EGreg
Thomas Nagel teaches at NYU, right?

~~~
miobrien
Yup.

------
hit8run
No need to read the paper. Just ask Batman.

Always be yourself, unless you can be Batman. Then always be Batman!

... just kidding. Interesting read anyways.

------
yashksagar
Somebody show this to Nolan; about time we got another Dark Knight movie.

