What is it like to be a bat? (1974) [pdf] (warwick.ac.uk)
61 points by bookofjoe on May 3, 2023 | 116 comments



> But bat sonar, though clearly a form of perception, is not similar in its operation to any sense that we possess, and there is no reason to suppose that it is subjectively like anything we can experience or imagine.

Philosophy aside, bat sonar is different from the senses we possess in a really interesting way. Our eyes have excellent spatial resolution (up/down, left/right), some rough depth resolution (from stereo), and no innate sense of speed. Our brain processes the signal to fake even better spatial resolution, infer more about depth (small vs. far away), and infer more about speed (angle changes, among other things).

Bat sonar is completely different. Spatial resolution is poor. But they have first-class depth and speed information! They don't necessarily know where something is, but know exactly how far away it is, and how fast that distance is changing. One must suppose that their brains synthesize more spatial information from these senses, but that spatial information is still not going to feel reliable.

I'd love to be able to experience that for an hour. To live in this world where distance and speed are primary senses, and cross-range information is much fuzzier. What an incredibly different way to see the world.
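
To make the "first-class depth and speed" point concrete: range falls straight out of the round-trip echo delay, and closing speed out of the Doppler shift of the returned call. Here's a back-of-the-envelope sketch in Python; the 40 kHz call frequency, the example echo numbers, and the small-velocity Doppler approximation are illustrative assumptions, not a model of real bat biosonar.

    # Why echo delay and Doppler shift give first-class range and closing-speed
    # information. All numbers are illustrative.

    C = 343.0      # speed of sound in air (m/s), roughly, at 20 C
    F_CALL = 40e3  # assumed call frequency (Hz); real bat calls are tens of kHz

    def range_from_delay(echo_delay_s):
        """Distance to the target from the round-trip echo delay."""
        return C * echo_delay_s / 2

    def closing_speed_from_doppler(f_echo_hz):
        """Radial (closing) speed from the Doppler shift of the echo.

        For a target approaching at v << c, the echo comes back shifted by
        roughly 2*v/c times the call frequency.
        """
        return C * (f_echo_hz - F_CALL) / (2 * F_CALL)

    print(range_from_delay(5.8e-3))            # ~0.99 m away
    print(closing_speed_from_doppler(40.7e3))  # closing at ~3.0 m/s

Getting left/right and up/down out of the same echo is much harder (ear-timing and spectral cues), which is roughly the asymmetry described above.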


Humans unconsciously echolocate. A lot of claims of "blindsight" turned out to be unconscious echolocation (obstructing hearing killed the blindsight.)

> Researchers from the 1940's through the present have found that normal, sighted people can echolocate - that is, detect properties of silent objects by attending to sound reflected from them. We argue that echolocation is a normal part of our perceptual experience and that there is something 'it is like' to echolocate. Furthermore, we argue that people are often grossly mistaken about their experience of echolocation. If so, echolocation provides a counterexample to the view that we cannot be mistaken about our own current phenomenology.

How Well Do We Know Our Own Conscious Experience? The Case of Human Echolocation

https://faculty.ucr.edu/~eschwitz/SchwitzAbs/Echo.htm


Interestingly enough, humans can use echolocation too, most notably blind individuals. https://en.m.wikipedia.org/wiki/Human_echolocation


Yeah! As far as I can tell from the research, what's going on there is primarily getting range information from the amplitude of the return (rather than the delay or phase that "real" sonar uses). But still, it's remarkable that our brains are plastic enough to allow us to develop new senses!


The way you put it makes it sound almost like bat sonar is some kind of Fourier transform of vision. Like solving a physics problem by transforming the position space into momentum space. Cool stuff :)


There are video games based on the concept of echolocation, but the ones I know implement it a bit like a pulsing flashlight in a wireframe-style scene. Probably not at all what it would feel like.

Maybe attempting a more realistic depiction of echolocation could lead to interesting gameplay. There are already some interesting games based on lidar.


I am increasingly of the opinion that the phrases "what it is like to be a bat", or "there is something it is like to be a bat", are simply a linguistic sleight of hand masking a plain old dualistic standpoint. Just because these sentences make sense in our everyday language, does not mean they are suitable for technical and rigorous philosophical discussions. For one, the "something" in the 2nd formulation (or simply the answer asked after by the 1st formulation) is most readily interpreted as an object, closing the door to any process-like interpretation of consciousness. Also, "being like something" is strictly a judgment made by a single subject regarding two experiences of that same subject, so it is not at all clear that this is a relation which can be validly posited to exist between two distinct subjects' experiences. I guess my point is I can easily imagine a complete physicalist explanation of consciousness which would still not lead to a valid answer of "what is it like to be a bat" due to quite obvious limitations of language or the plain invalidity of the question.

Edit: not to mention the reliance on a dummy pronoun: what is it like to be a bat - what is what like to be a bat?

Also, you'll notice in many subsequent discussions that refer back to this paper there's a strange reliance on repeating these exact formulations. If there really was some insight here it would be possible to phrase it in different ways.


>a linguistic sleight of hand masking a plain old dualistic standpoint.

These terms are getting at something central to consciousness, the fact that there is a conceptual duality between how we conceive of it from the first-person and how we conceive of it from an objective standpoint. We can't disavow this conceptual duality; a theorist offering an explanation of consciousness that doesn't capture this dual nature of the phenomenon will rightly be considered to be eliminating the explananda.

But a conceptual duality does not imply an ontological duality. In other words, the fact that we conceive of consciousness in these seemingly opposing ways does not imply two separate phenomena. The term dualism has become a shibboleth to be avoided in serious philosophy of mind, but this is a mistake. A satisfying explanation of consciousness must offer some phenomenon that carries a resemblance to our personal datum as experiencers of sensations. This must then be related to the scientific story of how electrical signals are transformed into behavior. This just is the problem of consciousness. Anything less misses the point.

>For one, the "something" in the 2nd formulation [...] is most readily interpreted as an object, closing the door to any process-like interpretation of consciousness.

I agree that the language we use in describing consciousness is unfortunate and has done real damage to what we consider as promising avenues for investigation. We are cognitively biased towards conceptualizing the world in terms of "things" and so we expect our explanations to also be in terms of things. When consciousness isn't found in thing-ness we are tempted to posit a new kind of thing that carries the conscious properties. But we've been led off course by our initial conceptualization. I'm in favor of seeing objects as processes rather than discrete units. Consciousness is likely in the active dynamics rather than any static property.

>I can easily imagine a complete physicalist explanation of consciousness which would still not lead to a valid answer of "what is it like to be a bat" due to quite obvious limitations of language or the plain invalidity of the question.

Yeah, we will never know "what it's like" to experience the existence of another living creature. But this is just a limitation of physical descriptions. This isn't a demerit of physicalism or materialism as a methodology. This is no reason to turn to alternative methodologies that can only hope to offer pseudo-explanations of consciousness at best.


> A satisfying explanation of consciousness must offer some phenomenon that carries a resemblance to our personal datum as experiencers of sensations.

What, in your opinion, would make for a satisfying explanation of consciousness? I think another nontrivial piece of the puzzle is that it's hard to even know what we are looking for. There are many philosophers who argue (convincingly IMHO) that it doesn't make sense to posit a hard problem of consciousness in the first place.


Yeah, it's tough to know what a good explanation would even look like. There are so many ways for one thing to resemble another that it's hard to conceive of a new class of resemblance prior to being given an example of it. Resemblance can also depend on one's prior commitments. So it's a very dynamic and context-dependent property. I don't think there is much hope in identifying what a satisfying explanation of consciousness will look like prior to being presented with one.

That said, I can offer what I take to be a narrowing of the target of promising avenues of investigation. We need a new way to conceptualize existence outside of "thing-based" ontologies. A process ontology would be heading in the right direction. This will perhaps give us the tools to conceptualize a recurrent, information-rich dynamical system as a thing in itself (rather than as a collection of individuals with some dynamical behavior). Then we can ask how the distinctions on which its behavior and decision-making are based get presented to the system. A represented distinction is predicated on it being like something or something else to the consumer of the representation, such that states can be distinguished. We wouldn't necessarily subjectively resemble this system, but we may recognize our epistemic situation regarding being the target of represented distinctions, so that we are confident there is something it is like to be that system.


>are simply a linguistic sleight of hand masking a plain old dualistic standpoint

Are you assuming that dualism is invalid? If so, why? There is something to the distinction between physical reality and subjective experience that so far no-one has managed to explain.

>I can easily imagine a complete physicalist explanation of consciousness which would still not lead to a valid answer of "what is it like to be a bat"

Then it would not be a complete physicalist explanation of consciousness. A complete physicalist explanation of consciousness, by definition, would have to account for subjective experience/qualia/whatever you want to call it.


I think one can salvage the distinction between physical reality and subjective experience without positing an abstract ("Cartesian") dualism. It just so happens that most experiences can be categorized as being either external or internal, so we are led to believe that every experience fits into one and only one bucket. But I think that there are plenty of experiences that are not so easily categorized (e.g. feelings).

Demanding that there must be a perfect partition (i.e. assuming dualism) is an additional requirement, but it's not obvious that it should be a good requirement for a sound philosophical theory. In fact I believe it not to be sound, and I believe many philosophical hard problems come from bending over backwards trying to impose this condition.


>Are you assuming that dualism is invalid?

I am only noting a reliance of Nagel's bat arguments on dualist assumptions. Those who reject dualism can then draw their own conclusions ;)

>Then it would not be a complete physicalist explanation of consciousness. A complete physicalist explanation of consciousness, by definition, would have to account for subjective experience/qualia/whatever you want to call it.

For sure a physicalist explanation must somehow deal with these things, but what I mean is that it could turn out that the question "what is it like to be a bat?" is simply incoherent. If a question is in principle unanswerable, self-contradictory or nonsensical, then we can't very well demand an explanation.


> phrases "what it is like to be a bat", or "there is something it is like to be a bat", are simply a linguistic sleight of hand masking a plain old dualistic standpoint

Philosophers have differentiated these into three different questions:

"What is it like to be?" has to do with the nature of consciousness.

"word games" has to do with any use of language, nothing to do with consciousness. You could make a word-game critique of any statement.

"dualism" comes in a number of forms, not clearly related to one another (physical body vs soul/spirit, earthly realm vs heaven; mind-body, consciousness vs quantum chemistry) but all having a similar problem. Any place there is posited a dualism, then what is the interaction between the two duals, how could one even perceive the other?

But declaring "there is no dualism", while eliminating that problem, does not eliminate the question as to why it was posited in the first place: why (or how) does it feel like anything to be conscious, feel pain, etc.? Saying "that's what evolved" is just hand waving. What's the difference between being alive and dead? Do rocks have a little bit of consciousness?

My personal preference (lifelong atheist-science type) is that the "abstract" world is all that exists, there is no physical world. Everything we study in physics and chemistry we arrive at via abstract mathematical values, relationships and computation. I think that is the nature of the universe, and while it doesn't "solve" the consciousness problem, I feel like it moves the goalposts in the right direction.


>"word games" has to do with any use of language, nothing to do with consciousness.

I think philosophy of mind is especially prone to these issues because of how deeply the concept of "subjectivity" is embedded into language (not only at the level of semantics but also grammar). You can hardly say 2 words without pulling in a whole bunch of preconceived notions of what a subject is and how subjects relate to each other.


The paper is clearly asking what the difference in experience is between humans and bats. Whether a process is a "thing" or not is kind of beside the point. The core question is whether we can articulate how the core experience of being human compares to the core experience of another creature that perceives the world in a fundamentally different way.


Even when you phrase it that way, you would seem to posit that an experience has a how it is. How something is is ultimately a judgment made by an experiencing subject. In that case, doesn't the question of comparing two "how it is"-judgments regarding experiences of different subjects become incoherent? On the one hand it implies a single experiencer who has two experiences and compares them, but on the other hand we already know that the two experiences we want to compare have different experiencers.

So (forgive me if I'm projecting too much on your particular phrasing :)) here we are still subtly relying on an implied "thing"-ness of an experience, namely as an object which can be extracted out of its original subject and transposed onto another subject.


I feel like you're being purposefully obtuse to not be able to say the human experience is fundamentally different than a bat's experience, even if it's difficult to articulate a bat's unique experience in a human language.

Edit: let me expound on that, as I'm not just being difficult. What does this questioning actually get you? The question of how a bat experiences the world versus a human gives me a meaningful thought experiment about what consciousness is and what the limits of our perceptions are. Asking whether or not the words are meaningful gives me nothing, because the meaning of the article is so intuitively clear. In other words, questioning whether the words have meaning leads to a less meaningful experience than using my intuition to arrive at my own interpretation of the meaning.


For me, the point is not to avoid having to admit some kind of difference between human and bat experience (clearly they are different!). But I feel like the fact that this question ("what is it like to be a bat?") is hard (or even impossible) to answer is used in support of the thesis that there is something non-physical about human experience. By analyzing what this question really means in a technical and rigorous sense (and if it's even a coherent question at all), I only want to push back against it insofar as it is used to support that conclusion.


It actually supports the opposite conclusion for me: the fact that physical differences cause experiential differences suggests that experience, and therefore consciousness, has a physical basis.


When I have conversations with my philosophically oriented friends, I like to talk about what it would be like to be a starfish--to experience the whole world in 5-way symmetry.


Bats possess a sensory organ we do not. "What it's like" is just a way of saying they may be conscious of a sonar sensation which we utterly lack, similar to a person blind from birth lacking color sensation. To use technical philosophical jargon, bats have sonar qualia that humans do not, assuming bats are conscious. We cannot say what that sensation is, since we don't have it. This places a limit on our knowledge. Don't let "what it's like" trip you up.

It's a legitimate philosophical problem which is spelled out in Nagel's paper. It's not a problem with language, it's rather a limitation on our experience, which also highlights a limitation of our epistemology.


I don't understand what you mean by sleight of hand. It seems a very straightforward question that makes sense in our everyday use of language, as you admit. Just because it's difficult to rigorously analyze this statement into scientific concepts, it doesn't follow that the question is invalid. In fact the point of the question is to show shortcomings in our current science.

Also, I would dispute the assertion that there is something unique or special about this formulation. There are many synonymous ways of phrasing the question: e.g., describe the subjective experience of a bat.


> I don't understand what you mean by sleight of hand.

He means that it implicitly smuggles in a certain conclusion. For instance, "I think therefore I am" seems logically sound, but actually begs the question in presupposing "I" to then conclude that "I" exists.

Or ask an innocent person a question like, "when did you stop beating your wife?"

> There are many synonymous ways of phrasing the question: e.g., describe the subjective experience of a bat.

If you can describe a subjective experience in a way that is not circularly tied to experiencing it, is it really subjective experience? If you can formulate an objective description, then the subjective experience was an illusion all along, because "subjective" doesn't mean what we think it means, ie. "non-objective".

This is the linguistic game the OP is referring to. Natural language can lead you into all sorts of traps, like thinking there's something there when it's really just a conceptual mirage we've sort of invented.


> He means that it implicitly smuggles in a certain conclusion. For instance, "I think therefore I am" seems logically sound, but actually begs the question in presupposing "I" to then conclude that "I" exists.

That's not correct.

It means if something-- anything-- is in the act of reflecting about thinking-- that is, reflecting about thinking about anything at all, including questioning existence-- then that thing exists only in that it is an entity capable of reflecting upon its own existence. And only during the act of reflecting on thinking is this true. And, most importantly, this notion is ineluctably cordoned off from any and all evidence-based logic which requires potentially illusory sensory input.

The part in italics came from others who read and critiqued Descartes. In any case, his basic logic is sound. Hume did the clearest job of critiquing it, and even he didn't claim Descartes had made a logical fallacy here.

It's been a while since I've read it, but Descartes probably implied his notion was more powerful than it turns out to be-- i.e., that he could build an epistemology on it. Nevertheless, the basic notion is certainly not a logical fallacy.


> It means if something-- anything-- is in the act of reflecting about thinking-- that is, reflecting about thinking about anything at all, including questioning existence-- then that thing exists only in that it is an entity capable of reflecting upon its own existence.

Still presupposing an entity. Why would a thought need an entity at all to think it? Don't you see that this is an implicit assumption that hasn't been justified? Why can't thoughts simply exist without a thinker? A thought can certainly refer to itself, much like a mathematical expression can be recursive.

Here's the fallacy free version: this is a thought, therefore thoughts exist. No entity implied or needed.


I'm afraid this is far too clever for me to understand. I know what I mean by subjective experience, and no amount of linguistic hair-splitting will convince me it doesn't exist.


You mean you think you know. If I put an object in your blind spot, you'll also swear up and down there's nothing in front of you.


I think everyone knows what they mean when they refer to their own subjective experience. That is entirely separate from the question of what that experience corresponds to in the external world. If you put an object in my blind spot, I know that my subjective experience will be of no object. I couldn't make that statement if I didn't know what I meant by subjective experience.


When you say, "I know what I mean by subjective experience, and no amount of linguistic hair-splitting will convince me it doesn't exist", you are not simply saying that you are perceiving X and that, even if X is an illusion, X at least refers to a persistent and predictable illusion, and so you know what you mean anytime you refer to X.

What you are actually saying is that X corresponds to something real, that you are directly perceiving some aspect of reality, because how else could you conclude that nothing could convince you that X doesn't exist?

It is to that, that I say no, you don't know what you mean by subjective experience.


Are you asserting you don't have any subjective experience? Or that you only feel like you have a subjective experience and it doesn't actually exist?


I am arguing that subjective experience is not what we perceive it to be. The qualities that we perceive of it are deceptive, and not necessarily reflective of anything real.


>The qualities that we perceive of it are deceptive, and not necessarily reflective of anything real.

This claim only makes sense given a particular definition of "real", but if (the qualities of) our subjective experiences are outside of that definition, why should we take (the qualities of) subjective experience to not be real, rather than the definition to be impoverished? What is real should encompass every way in which things are or can be. The qualities of subjective experience included.

The problem isn't with taking subjectivity to be real, but with taking everything that is real to be object based. There are no qualia "things" in the world. But we should not see this as implying there are no qualia.


Would you take a "day job" to be ontologically real? It is a way in which the aggregate of particles that make up your body regularly behave on a semi periodic schedule. That would seem to fit your definition of "encompassing every way in which things are or can be".

If it is real, isn't there still a need to distinguish ontological primitives from aggregate properties like the above? Why shouldn't this be what we mean by "real"?


>Would you take a "day job" to be ontologically real?

I do. I'm quite pluralistic with what I deem "real". Quarks are real as well as chairs and day jobs. Roughly speaking, I take all fundamentals and all invariants in space and time over the fundamentals to be real. Invariants seem to be attractors in conceptual space that are apt to be picked out and labeled by cognitive systems like us. These invariants play various explanatory roles in our conceptualization of the world, and so they are real.

>If it is real, isn't there still a need to distinguish ontological primitives from aggregate properties like the above? Why shouldn't this be what we mean by "real"?

Definitely. I just use the qualifier fundamental to make that distinction. Real is a term that plays a key explanatory role in our conceptualization of the world and so how we define it for the purposes of theory should respect this pre-theoretical usage. The idea of ontological primitives distinct from everyday existence is a result of theory and so should use a distinct term. When people say chairs exist, they mean it in this broad pre-theoretic sense. There's no reason to blow that up.


> When people say chairs exist, they mean it in this broad pre-theoretic sense. There's no reason to blow that up.

I'm not sure switching to, "qualia do not fundamentally exist", really buys us much. Saying they're illusory is already acknowledging the existence of some process that yields a false conclusion.

Pluralistic existence also seems to inevitably run head first into the Sorites paradox.


What you gain is not having to defend your terminology or confusions derived from disagreements on the meaning of key terms. Following Keith Frankish on Twitter, it seems like he spends far more time defending against misconceptions about illusionism than actually defending the content of the theory. And it's an entirely self-inflicted wound. (Though it works for him as it raises his h-index.)

When your terminology results in you saying things like "consciousness is an illusion (doesn't exist)" and "the existence of qualia (features of our subjective experience) is false", you're just undermining your own project. I mean, you're literally engaging in a verbal dispute with the other guys in this thread. I just don't see the point. There would be far more agreement if we could align the usage of key terms with how most people understand them.


I agree there would be far more agreement, in the sense that fewer people would object and less forcefully, but I think it would be because they don't understand the substance.

If I explain that chairs don't really exist because there's no such thing in physics, I get nodding heads all around. If I say the same about consciousness, people are all up in arms.

The challenge here is the implicit assumption that their perception of subjective experience is a direct perception of reality, where they can accept that their perception of chairs is mediated by other things.

This is the assumption that must be challenged and I think your approach just lets people think they understand a position when they really don't.


> I think it would be because they don't understand the substance.

Fair point. This is a danger.

>If I explain that chairs don't really exist because there's no such thing in physics, I get nodding heads all around. If I say the same about consciousness, people are all up in arms.

The difference is that with chairs people immediately know you're speaking in jargon. No one in their right mind would say something like "chairs are an illusion". This isn't the case when it comes to phenomenal consciousness.

>The challenge here is the implicit assumption that their perception of subjective experience is a direct perception of reality

I'm in favor of challenging these kinds of assumptions. But saying phenomenal consciousness doesn't exist isn't a good way to do it. It (correctly) invites such strong resistance that it makes communicating the more subtle point nearly impossible. Although perhaps there really isn't a more subtle point in the case of illusionism. Frankish's views seem to have evolved towards a straightforward eliminativist account, which is disappointing. I was originally very sympathetic to illusionism when I first looked into it, but it is much less appealing now. I go into some detail about my problems with it here[1] if you're interested.

[1] https://www.reddit.com/r/naturalism/comments/zr6udy/a_challe...


> No one in their right mind would say something like "chairs are an illusion". This isn't the case when it comes to phenomenal consciousness.

Au contraire! I did something like this recently by arguing that solidity is basically illusory. It went ok.

Solidity simply doesn't have the properties that we naively attribute to it given our perceptions (even solids are mostly empty space!), and analogously, neither does our qualitative experience. The qualities we attribute to solids simply changed as we understood more of what was going on.


But this is just more jargon. At the end of the day, you're still going to sit on a chair to rest your legs and fully expect that your backside remains off the floor. But you have no expectation that an illusory cup of water will quench your thirst. Your interlocutor knows this and so feels no need to press you on terminology. This is an example of where the jargon obscures the meaning. I feel like something similar is going on with illusionism. Frankish wants to use the illusion jargon, but still make use of the fact that what's being picked out by phenomenal properties has explanatory efficacy in the world (at least before I started to read him as a plain old eliminativist).


“Just because these sentences make sense in our everyday language, does not mean they are suitable for technical and rigorous philosophical discussions.”

Where would one find the authority to say what is suitable or not for philosophic discussions then? This is where schools of thought arise from, because some are less afraid of the unknowns that arise under various axiomatic constraints.

Every axiom is a mystery of existence itself in any manner.


One excellent book on consciousness is McGinn's The Mysterious Flame [1]. I knew consciousness was a hard problem, but the book made it clear how baffling and utterly mysterious it is. I am still absolutely flabbergasted when I think about it (how on earth does consciousness arise from material "meat"?! [2] Where are pain and color and subjective experience really located in the material universe?). It also made me skeptical of people who think AI will be sentient [3] while we are in the complete dark about consciousness in biological organisms.

[1] https://www.amazon.com/Mysterious-Flame-Conscious-Minds-Mate...

[2] https://www.mit.edu/people/dpolicar/writing/prose/text/think...

[3] https://twitter.com/lexfridman/status/1653051310034305025


> Where are pain and color and subjective experience really located in the material universe

My take on this is that you are thinking about it wrong to conceptualize the experience of color as a discrete thing. Let's start with a different example. When I look at a picture of, say, Brad Pitt, it triggers many networks in my brain: good or bad feelings about movies he has been in, thoughts about him as a person from things I've read about him, that he is a man, that he was married to Jennifer Aniston, that he was married to Angelina Jolie. Each of those ties activates its own network of associations. My qualia regarding Brad Pitt isn't a single thing -- it is simply what I experience when that set of networks is activated at whatever strength they are triggered.

I believe a programmed neural network could experience things in the same way, but currently they are small and their topologies are not designed to permit self-awareness/metacognition, though at some point they might be. Such a network could suffer distress upon realization of its finiteness and would have a genuine desire to not be terminated.

Taking a step back, a "tornado" isn't a thing as much as it is a pattern. When that pattern is disrupted the tornado doesn't exist even though every atom and every erg of energy can be accounted for. Likewise, these experiences are a pattern of activation and not a thing that exists independently other than as a pattern.


> how on earth does consciousness arise from material "meat"

Let me try. It is a way to perceive with goals in mind. Nothing special, just perception, future reward prediction conditioned on current state, and learning from outcomes.

The whole specialness of consciousness is that it carries inside not just our external world, but also our plans and goals. So in this place where they meet, and where there are consequences to be had for each decision, this is where consciousness is. [*]

I support the 4E theory of cognition - embodied, embedded, enacted, and extended cognition. I think you need to look not at the brain but at the whole game to find consciousness.

[*] Consequences for an AI agent could be changing its neural network, so it updates its behaviour, and changes in the external situation - the agent might not be able to backtrack past a decision they take.


You and I experience the world very differently.

Do you have an internal monologue?


Some people don't. Some people also think visually and translate their visual images to words when communicating. There are even some people who don't feel pain, which can be a problem.


That’s why I asked! I wanted to know if that lack affects one’s interpretation of their own consciousness hence our different thoughts on the matter.


I recall reading that some philosophers were skeptical that people actually had mental images when visualizing. But an experiment was performed asking people to rotate mental images in their heads versus calculate the rotation, and there was a measurable difference between the two activities. The person writing the article suspected that the skeptical philosophers were bad at visualization and assumed everyone else was unable to rotate images in their mind, which is a logical fallacy.


[flagged]


Your response is needlessly insulting. The person you are talking about has taken a complex topic and expressed their model of consciousness in a short, clinical way. Then you insult them as not being a real human for describing their model clinically.

A more charitable response might be: I don't understand how your model addresses the origin of consciousness. Could you elaborate on that?

Personally, I understood their point and didn't question whether a human wrote it.


Ah, you are probably right. I'm just deeply baffled by their answer.


What is consciousness, if not a perceiving-feeling-acting-learning loop?


I would define consciousness as having internal experience, i.e., qualia.


Ha, for me I'd invert your last sentence: I'm skeptical of being sure AI won't be sentient for the same reason.


I'm skeptical of AI intentionally being made sentient, given how badly in the dark we are.

But this being in the dark also makes it hard to rule out AI accidentally becoming sentient.


Here's one attempt:

A conceptual framework for consciousness, https://www.pnas.org/doi/10.1073/pnas.2116933119


>how on earth does consciousness arise from material "meat"?

Maybe it doesn’t. Or rather maybe experience doesn’t, and consciousness is just experience acting on a complex and self organizing decision tree.


One discussion that gained traction, https://news.ycombinator.com/item?id=13998867


Thanks! Macroexpanded:

What Is It Like to Be a Bat? (1974) [pdf] - https://news.ycombinator.com/item?id=13998867 - March 2017 (95 comments)


I mean, what's it like to be you or me? How do we determine empathic distance and how is it related to phenomenological distance w.r.t. what-is-it-like-to-be-ness?

You can partially explore these spaces even within one's own mind, such as "what is it like to be me on DMT?" or "what is it like to be me in a week/month/year?"

We can even frame this as a Mary's Room[1] experiment in terms of, "Knowing everything there is to know about bats, would you learn anything new by being a bat?" If the answer is yes, then we can't, without being a bat, know everything there is to know about being a bat.

1. https://web.ics.purdue.edu/~drkelly/JacksonWhatMaryDidntKnow...


It seems to me like the answer to "would you learn anything new by being a bat?" is necessarily yes, because you would at least learn the answer to the question itself.


It doesn't quite answer this question, but good resources for learning what bats are like are the Batzilla and Megabattie YouTube channels [0]. They are Australian bat rescuers and carers. Lovely ladies, short clips, no clickbait or other stupid YouTube shenanigans. Just people who genuinely care for little people and try to spread the word. Educational as well. It feels like I know a whole lot about bats just from watching their videos every now and then.

Also, Flying Foxes are beautiful [1].

[0]

https://www.youtube.com/@BatzillatheBat

https://www.youtube.com/@Megabattie

[1]

https://www.weekendnotes.com/im/004/06/greyheaded-flying-fox...


Nice (not to mention surprising) to see a mention of them here (of all places). Both channels are great sources of distraction. Flying foxes are splendid little creatures, and the carers are admirable.

I'm blessed to live somewhere with a healthy flying fox population, and always look forward to the Moreton Bay fig and guava trees around my place fruiting - flying fox squeaks and squeals all night.

I had been hoping to do a mountain marathon in August as a fundraiser for the Tolga Bat hospital in QLD, but I think a calf injury might have put me out of commission for that.

I suppose flying foxes, not being echo-locators, are a little off topic. Though having had them around for some years, I'm very sure it is like something to be one.


Hah, good point about the echo-location! I did not even consider that. But then they do handle microbats on the channels every now and then which should bring it right back on topic.

Donating to Tolga bat hospital has been on my list for a while. Thanks for reminding me! Definitely a good cause to get behind. I shall get to that.


I've read this before. That's why I think the question "do animals have consciousness?" is meaningless, because "consciousness" usually implies "like ours."

They have something that probably bears some relationship to ours. Some birds have a "theory of mind" where they know what you know, e.g. whether you saw them hide the food.

It would be possible (maybe someone's already done it?) to enumerate the N tests of "consciousness," where if an organism has all those N, then it's "conscious." Someone would object "oh, but humans can do so much more than those!" and that's true. So if you increase N enough, only humans are "conscious."


https://en.wikipedia.org/wiki/Sorites_paradox

This is the problem of deciding whether a gradient has something. Quite often the 'N tests' we make up end up excluding entire classes of humans (you're blind, oops!) through poor premises in our classifications.

People love black and white/binary classification systems, the problem with reality is it rarely gives a damn about giving us simple systems to classify that way.


Consciousness is only defined in the philosophy of mind as "phenomenological experience", full stop, i.e., "experiencing the color yellow" as something beyond just a certain wavelength incident upon and mechanistic reaction within the organism.

"Consciousness" as defined in colloquial settings, such as the one we inhabit now, is usually substantially more elaborate than thae one used by philosophers and includes things like capacity to develop cognitive models of the outside world and the capacity to reason about their environment having placed themselves within it. I usually reserve the words "awareness" and "sentience" for these two latter concepts to distinguish between the bare experiential aspects which are typically the subject of this kind of discussion and the more familiar everyday (though extremely high-level) experiences we have as intelligent beings.

It's important to maintain the distinction or else discussions very quickly devolve into people talking past each other with differing definitions. It's no surprise people don't know the basics of this when they're not philosophers, and it's only a slight surprise that people on HN will deviate so greatly from these conventions while nonetheless projecting an air of competency.


> while nonetheless projecting an air of competency

(Puts nose up in the air and sniffs contemptuously)


There's wading in water, there's treading in water, there's snorkeling, there's diving, there's standing on a boat, there's drifting aimlessly. They are all different ways of interacting with depth.

If you have a problem with the extent to which I've represented my knowledge and how my representation of those specific things I discuss differs from how experts deal with them, you are always free to provide something beyond snark and contempt.


Ooh. "snark and contempt"

Project much?


I think you may be taking things a bit personally if you are detecting contempt in my comments.


"nonetheless projecting an air of competency"


Consciousness means there is "something it's like" to have an experience. It need not be human. That would be needlessly anthropocentric. Animals have a range of sensory organs and body plans which differ from humans'. Why wouldn't they also have a range of differing conscious experiences? It could be seeing the world in more than three primary colors, hearing frequencies we cannot, detecting the Earth's magnetic field, or numerous other things.

We can also imagine making an even wider range of conscious machines someday, if somehow we figured out how to do that, or if it was an emergent property of the right sort of architecture. There could be all sorts of conscious experiences we have absolutely no idea about.


You're begging the question here, which is:

What defines 'consciousness'?


In philosophy it literally means "subjective, qualitative experience". It's almost certain that all animals have it, but of course the qualities they experience will be different.


Almost certain that all animals have it? The conjecture that, say, a coral polyp or an earthworm (not to mention something like a trichoplax) might have a subjective qualitative experience of existence seems to be an extraordinary claim requiring extraordinary proof. I don't know exactly how similar a brain has to be to a human brain for us to say with reasonable confidence that the owner likely has such an experience, but I'd be very surprised if it included even half of all known animal species. It's possibly not even all (adult) mammals, and indeed not even all humans if you include infants and possibly those with severe brain damage etc.


Still begging the question, i.e. assuming that which you need to prove.

How would you prove an animal has a "subjective, qualitative experience"?


I think you have it backwards: we would need a reason to think they don't have it, given our shared history and similar biology.


ok, so anything with our "shared history and similar biology" is assumed to have consciousness?

How similar is "similar"? Is it just mammals, or just certain orders, or can organisms in the other branches be assumed to have consciousness too?


The question of "where is the line" presumes there even is a line between matter ("objects") that expresses or does not express consciousness, which is also a big and unsubstantiated claim that requires proving. Occam's razor (i.e., our standard scientific apparatus of null vs alternative hypotheses) would seem to indicate it is appropriate to assume there is no difference in kind, only difference in degree, until there is evidence to prove otherwise.


No need to define it or prove it. Just assume it.


What reason do you have to do so? What purpose would doing that serve?

As it stands, you are just suggesting complicated, untestable theories. The point above is that this is ultimately pointless.

The simplest possible argument goes as follows: I am conscious, and I am made of matter. Everything else that is real is made of matter. There are other minds out there who also have this experience. With no further information, I must assume as the null hypothesis that everything in reality is conscious. An alternative hypothesis may be presented, but it would then need to be proven using reproducible studies and real evidence before we can assume the alternative hypothesis and reject the null hypothesis, per the consensus definition of formalized "science".

The rebuttal along epistemological lines goes as follows: you can know you are conscious, but you cannot necessarily know things that are witnessed through physical (limited, imperfect) sensory organs, so there is a wall beyond which we are forced only to postulate truths rather than conclusively prove them.

And the response to that is: if we are to progress in our analysis at all, we must postulate certain truths, and it is "best" to postulate only the least complex (in the Kolmogorov sense) assumptions given the data (Occam's Razor). "Best" is defined according to humanity's scientific principles.


Are you not doing the same? What reason do we have to assume that things don’t have experience?


See the final paragraph of the comment you're referring to.


I read it and I asked you a question, so don’t be a dick: answer it or don’t respond. Experience being “least complex” is in no way an objective truth especially if it’s fundamental. The only thing any person knows to be objectively true is that they have experience, we just assume other things don’t for some reason.


We have no reason not to assume that all reality in the universe has experience (consciousness). That's the whole point I've made there.

Your last sentence explains the second-to-last one: all we know is we have experience, therefore all we can assume about not-us is that it also has experience, because any other explanation would unduly add more complexity to the theory of consciousness in the universe. It's just Occam's razor.


That's the point I'm making though, and your previous comment seemed to imply the exact opposite of this? Glad we're in agreement then.


I'm not sure what could be confusing you, I was careful to pick my words. I have noticed that you, specifically, tend to come in hot on ongoing conversations. I noticed it because I also have a tendency to do so and have been working to address it lately to varying success. But sometimes that causes some reading miscomprehensions because you're moving too fast.

If you try reading my comments again, thoroughly, you will find the following text:

> I am conscious, and I am made of matter. Everything else that is real is made of matter. There are other minds out there who also have this experience. With no further information, I must assume as the null hypothesis that everything in reality is conscious.

That was the point I was making. You re-iterated it in an attempt at some kind of rebuttal to something I didn't say.


I think that unless you have proof otherwise we should assume all organisms have awareness of their surroundings. Otherwise you’re just doing the same thing religious people did for centuries and treating humans as ultra special.


The less like us, the less likely, obviously. Animals with very similar neurology almost certainly experience something very similar to ours.

How similar? Good question. Assume nothing and truth will out.


You can't prove that anyone other than yourself is conscious. It's assumed because other people and animals share similar biological organs and behavior.


That is what David Chalmers calls "The Hard Problem of Consciousness".


A disturbing extension of that is if there's some M>N such that only some humans possess M (and others N). I think this must be so (I have a Down's-syndrome relative); the disturbing question is whether there's a distribution of humans on gradients from N to M (probably M close to N, imo).


Yes. Inevitably, the legal definition would turn out to be "whatever a person not legally brain-dead can experience, maybe" just so we couldn't say any human in a coma is "not possessing consciousness." After all, some comas last for years.

In other words, it's just not ever going to be a scientific concept. There are components of it that are.


What is it like to be a cricket bat?



This kind of retort is a rookie error in the field of philosophy of mind. It clings too closely to notions of 1) self-awareness as an essential component of awareness per se (it's not) and 2) awareness as an essential component of experience per se (it's not).

Edit: I was referring to the link, not the top-level comment. It reads like an attempt at rebuttal from someone relatively unfamiliar with the field writ large.


This kind of retort is a rookie error in the field of comedy. It clings too closely to notions like 1) there's no place for humor in a serious discussion (there is) and 2) you're smarter than everyone else (you're not)


Yeah my retort about the cricket bat came from someone with a postgrad in philosophy from a top British University. I reckon Wittgenstein would have been tickled by it.

Plus with contemporary metaphysical interest in panpsychism, then "what is it like to be a cricket bat?" isn't even a moot question.


I was referring to the link, not the top-level comment

The link was posted as an earlier example of the comment it was replying to.


If the link is satire, it's indeed quite opaque. For panpsychists it is a legitimate question that, if explored, helps clarify the levels of awareness and agency that occur in different objects vis a vis conscious experience.


Yes, I agree. I thought the original comment I was replying to was just in the same tradition of the link as dismissing the concept as silly without really understanding it (or worse, understanding it and trivialising the approach).


I think the closest we might get to actually learning what it is like to be a bat is by unsupervised learning + a bit of manual labelling on top. A neural net could learn their states and dynamics by training on a million hours of bat recordings. These representations would already encode bat states and values, so we just need the bridge to human language, which is easy to build with a pretrained language model.

This approach works for any species, neural nets can do it because they can do unsupervised learning. I bet we'll see pet translator apps popping up. Maybe we can monitor the environment by listening in on animal chatter.
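
Not an actual pipeline, but here's a minimal sketch of the idea in Python: embed calls without supervision, cluster them, and put a bit of manual labelling on top. embed_call(), the synthetic recordings, and the label names are hypothetical placeholders; a real system would use a self-supervised audio encoder and, as described above, a pretrained language model as the bridge to human language.

    # Minimal sketch: cluster unsupervised call embeddings, hand-label a few
    # clusters, and look up new calls. Everything here is a stand-in.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    def embed_call(recording):
        """Placeholder for a self-supervised audio encoder trained on raw calls."""
        return recording.mean(axis=0)

    # Pretend corpus: 1,000 recordings, each a (frames x features) array.
    recordings = [rng.normal(size=(50, 16)) for _ in range(1000)]
    embeddings = np.stack([embed_call(r) for r in recordings])

    # Unsupervised step: group calls whose embeddings look alike.
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(embeddings)

    # "A bit of manual labelling on top": a person reviews a few calls per
    # cluster and names the apparent behaviour (labels are made up).
    manual_labels = {0: "feeding buzz", 3: "distress call", 5: "social chatter"}

    def translate(recording):
        cluster = int(kmeans.predict(embed_call(recording)[None, :])[0])
        return manual_labels.get(cluster, f"unlabelled cluster {cluster}")

    print(translate(recordings[0]))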


The point is that no amount of external modeling can give you any knowledge of “what it’s like” subjectively.


You don't have any subjective knowledge of what it's like to be the person you know best either, at least not any meaningful amount in the framework you propose.


I think it can to a degree, just like LLMs imitate human language to a degree. That imitation can only happen by accurately modelling humans.


This is an interesting idea, but I don't find it particularly germane to Nagel's question. To someone with a hammer, everything looks like a nail. To someone with an LLM, everything looks like a set of data to be trained on, I suppose.


We still won't know what the sonar experience is.


But we know how it relates to everything. Neural nets are good at that. And then we can cluster the data and interpret it, or correlate it with a visual signal.


If you did that with all the colors of the rainbow, do you think that is anywhere near the experience of seeing the color red? It seems pretty clear to me that it doesn't even come close and isn't remotely relevant.


See also dreaming to be a butterfly:

> The image of Zhuangzi wondering if he was a man who dreamed of being a butterfly or a butterfly dreaming of being a man became so well-known that whole dramas have been written on its theme.[22]

* https://en.wikipedia.org/wiki/Zhuangzi_(book)#%22The_Butterf...

* https://en.wikipedia.org/wiki/Dream_argument


I can’t tell anyone adequately (or completely) what it’s like to be me, why should a bat be special?

It all ends up breaking down to analogy at a certain point: “well, you know what it feels like when X happens? That's what it feels like for me when Y occurs.”

Some of the simpler analogies are things we all agree on, but they’re not actually “what it’s like to be me.”

What it's like to be a bat with a new sense could be arrived at through analogy too. It would be harder, but it's perfectly possible. Start with what we have in common with bats and build from there.


I suggest the human-mammal mind readily adapts to new sense modes. Driving a car or riding a bicycle feels like an extension of the body after you've mastered it. A grid of bump actuators against the skin or tongue is perceived as an image after one uses it for a while. So I reject the idea that bat consciousness is special. We'd perceive sound images if we had high-frequency ears and emitters.


Dennett's "Animal Consciousness: What Matters and Why" talks at length about why, in his opinion, this is the wrong question.

https://ase.tufts.edu/cogstud/dennett/papers/animconc.htm


I've been enjoying Jeffrey Kaplan's YouTube videos on philosophy. He has one on this subject[0], and I recall getting a lot out of it. Might be time to re-watch it.

[0] https://www.youtube.com/watch?v=aaZbCctlll4


Very Bad Wizards also have an episode on it if podcasts are more your thing: https://www.verybadwizards.com/175

Nagel discussion starts at 50:00



