This is a great essay, and if you want to get the most out of Nagel's argument, I would highly recommend this piece as a counterpoint on explaining consciousness ("qualia"): http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...

If Nagel thinks materialists can't explain consciousness, Dennett thinks they can. E.g.

"The obvious answer to the question of whether animals have selves is that they sort of have them. [Dennett] loves the phrase 'sort of.' Picture the brain, he often says, as a collection of subsystems that 'sort of' know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they 'sort of' have consciousness, as measured by human standards." Joshua Rothman, New Yorker, MARCH 27, 2017 - http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts...

A more detailed counterargument by Dennett: https://www.amazon.com/DARWINS-DANGEROUS-IDEA-EVOLUTION-MEAN...




If Nagel thinks materialists can't explain consciousness [...]

I do not think he makes this statement. He seems perfectly open to the possibility of explaining subjective experiences in physical terms, but he is convinced that we are at least very far away from being able to do it. As a consequence, he obviously considers all current attempts flawed and lacking.


I appreciate the pushback. Here is what Nagel says in his essay (emphasis mine). I wonder if you still disagree?

"It is impossible to exclude the phenomenological features of experience from a reduction in the same way that one excludes the phenomenal features of an ordinary substance from a physical or chemical reduction of it--namely, by explaining them as effects on the minds of human observers. If physicalism is to be defended, the phenomenological features must themselves be given a physical account. But when we examine their subjective character it seems that such a result is impossible. The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view."


Let me counter with this quote.

If we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue how this could be done. The problem is unique. If mental processes are indeed physical processes, then there is something it is like, intrinsically, to undergo certain physical processes. What it is for such a thing to be the case remains a mystery.

I think the critical point in your quote is the last sentence.

The reason is that every subjective phenomenon is essentially connected with a single point of view, and it seems inevitable that an objective, physical theory will abandon that point of view.

Sure, my experience of looking at a red object is fundamentally my experience, but there seems to be no obvious reason why we could not abstract me away and talk about the experience of an arbitrary human seeing a red object. This is also in line with the suggestion at the very end of the essay to develop the tools to talk about experiences in an objective manner.


I don't think I fully understand what Nagel means by developing objective tools to describe experiences; I should re-read the full article a few more times. I think, though, that you're exactly right: We _do_ have a shared understanding and experience of what it's like to see a red ball, and we can talk about it in the abstract. My reading of that is that because we do have a shared human experience, there can, and indeed must, be a physical account (Nagel's term) of it. Nagel's argument drives a wedge of doubt into whether that shared experience itself is real by saying, at least in part, that because we can't explain what it is like to be a bat (the bat's point of view) in human terms, we won't be able to explain what consciousness is for humans _or_ bats with a physical account (at all, or at least right now). Dennett, I think, says that this is more or less backward, and that all it means is that there are different kinds of consciousness. In the article I mentioned, he says,

"The big mistake we’re making,” [Dennett] said, “is taking our congenial, shared understanding of what it’s like to be us, which we learn from novels and plays and talking to each other, and then applying it back down the animal kingdom. Wittgenstein”—he deepened his voice—“famously wrote, ‘If a lion could talk, we couldn’t understand him.’ But no! If a lion could talk, we’d understand him just fine. He just wouldn’t help us understand anything about lions.”

“Because he wouldn’t be a lion,” another researcher said.

“Right,” Dennett replied. “He would be so different from regular lions that he wouldn’t tell us what it’s like to be a lion. I think we should just get used to the fact that the human concepts we apply so comfortably in our everyday lives apply only sort of to animals.” He concluded, “The notorious zombie problem is just a philosopher’s fantasy. It’s not anything that we have to take seriously.”

I found that convincing; I'm curious if you do as well.


I agree insofar as being a lion or a bat is probably really different from being a human. It is a pity that I can not remember what it felt like when I was only one or two years old; that might provide a glimpse of the difference. In all the time I can remember I was not fundamentally different from now, or at least that is how I remember it. I would really like to know what it was like at a very young age, not speaking and understanding a language, not recognizing myself in a mirror, maybe not even being aware of my existence.

Anyhow, I did not read the entire Dennett article when it was posted here a few days ago (maybe I should), but it was just not compelling to me [1], at least as far as I got. What I got from the part I read is that he seems to do exactly what Nagel warns of, dismissing the experience of being a human. I find the comparison with a computer much more interesting than the comparison with animals. What if we build an artificial neural network resembling a human brain? If that is not good enough, what if we perform a molecular simulation of a brain? Or even a quantum physical simulation of a brain, if molecules are still not good enough, though personally I doubt that would be necessary.

But what if? Does this artificial brain experience what it is like to be a human? As a physicalist I think the answer is yes. But just as Nagel says, I have no idea how this could possibly work, how the transistors in my computer could go from controlling the flow of electrons by mindlessly following physical laws to being aware of their existence in a universe, seeing red, feeling joy and pain. What if I replaced the computer with a mechanical one made out of billions and billions of cogwheels? With stones on a beach simulating a Turing machine? With a gigantic printed look-up table mapping all possible inputs to their outputs?

I can not think of any good reason why the stones on the beach - together with someone or something moving them around to perform the computation - should be any less conscious than the human brain they are simulating. And this seems of course absurd. Thinking about this is what gets me closest to becoming a dualist or something like that. There seems to be not even the tiniest bit of hope on the horizon of being able to attack this problem from a physicalist perspective. So when Dennett says that there is no problem, assuming he actually says this, then I must disagree.

[1] I had prior exposure to Dennett and, as far as I remember, quite liked what he had to say but somehow not this time. Maybe the topic was a different one, maybe it is just the way the article is written, maybe I should just read the entire thing.

P.S. I just did some more reading on Nagel, and it seems you are at least more correct than I am. He seems not as open to a physicalist account of consciousness as I thought, but the details are hard to tell without actually reading more of his work.


The New Yorker article on Dennett is actually not up to the New Yorker's usual standards, but it gets at a few key points and punchlines. I linked to one of Dennett's books that I think every physicist would enjoy in my first comment.

I think two inverse arguments to the one you mention are more appealing - about getting close to a human brain with neural networks but not quite being there. First, and the New Yorker article actually mentions this at the end: if you had a damaged human brain, we could all clearly see that you are still human and conscious, just not exactly the way someone with a normal, healthy brain is. Second, and this gets at not over-reducing the physical aspect of consciousness: say a Nobel-prize-winning physicist claims she has located physically in the brain where consciousness lives (I think this actually happened). You could quickly ask her, "So, if you take that puddle of neurons and other material where consciousness lives out of your brain and put it in a jar, would you say 'you' are now sitting in that jar experiencing what it's like to be in a jar?" (also absurd).

As for rocks, that does sound absurd! :)

Really appreciate your thoughts.


> I can not think of any good reason why the stones on the beach - together with someone or something moving them around to perform the computation - should be any less conscious than the human brain they are simulating. And this seems of course absurd.

It would take around 3,000,000,000,000 years and an enormous beach to simulate one second of brain activity. It is literally unimaginable. What do you imagine when you talk about absurdity? Is it some small-scale model, which is laid bare before your mind's eye in all its simplicity, leaving no place for consciousness to hide?
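
(For a rough sense of where a number of that order comes from, here is a back-of-envelope sketch in Python; the synapse count, event rate, and moves-per-event below are round-number assumptions for illustration, not measurements.)

    # Rough back-of-envelope; every number is an assumed order of magnitude.
    SYNAPSES = 1e14          # assumed synapse count of a human brain
    EVENTS_PER_SECOND = 10   # assumed average events per synapse per second
    MOVES_PER_EVENT = 1e5    # assumed stone moves needed to process one event
    MOVES_PER_SECOND = 1     # one stone moved per second

    moves = SYNAPSES * EVENTS_PER_SECOND * MOVES_PER_EVENT   # moves per simulated second
    years = moves / MOVES_PER_SECOND / (365.25 * 24 * 3600)
    print(f"{years:.1e} years per simulated second")         # ~3e+12 with these numbers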


I certainly see a beach with a few hundred or so stones when I close my eyes, but I don't think that matters. It is simply the idea that stones on a beach - a lot of stones on a very long beach in a very specific arrangement, relentlessly reordered by Tom Hanks for billions of billions of years according to a very long list of rules overseen by Wilson - could really feel joy and pain that seems absurd. I know the common argument is that it is just the sheer scale required, which we are unable to imagine, that leads us astray, but in the end it just seems wrong to me that stones on a beach can feel pain. But if you think about a computer simulating a brain at the level of neurons, something that is somewhat in reach, does this make it any easier? Does it sound so much less absurd that a data center packed with GPUs could really feel pain?


Stones don't communicate among themselves, and only change state under the entirely external effort of the entity that moves them. I would argue that the stones would need some mechanism for modifying their own state to have even a slim chance at consciousness. Seeing as stones are inanimate objects that can't possibly operate any mechanism, I think the idea is dead in the water.

Another way of looking at it: the significance of any particular arrangement (or sequence of arrangements) of the stones is only meaningful in the mind of the entity that is moving them around, or perhaps in the minds of any nearby viewers with the patience and far-fetched ability to make sense of the iterations of stone arrangements. The internal/external distinction between the stones themselves and the stone movers/viewers seems critical to me.

Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.


You need of course something moving the stones and interpreting the arrangement, but there is no need to bother Tom Hanks with that. Just throw in a Roomba pushing the stones around according to the state transition function. Make it a really quick one so that something meaningful can happen before the sun dies. And for good measure throw in a simple humanoid robot with sensors and actuators, from which the Roomba receives inputs to the computation and to which it sends control signals decoded from the arrangement of the stones.

Now that is not just a pile of stones, but none of the added things seems to add much complexity. A robot pushing stones according to predetermined rules can be very simple. Even simpler than a Roomba would be a gantry crane above the stones; it could essentially be just a few motors, a claw, and a switch to detect the presence or absence of a stone. I also just realized that the state transition function would not be an unimaginable monster with room to hide something in it. You do not need much code to simulate a neural network regardless of its size, and it would probably not grow that much when encoded for a Turing machine.
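
(To make that last point concrete, here is a toy sketch of a synchronous network update in Python; the sizes and the squashing nonlinearity are arbitrary assumptions, not a brain model. The only point is that the update rule stays a handful of lines no matter how large n gets.)

    import numpy as np

    def step(weights, activations):
        # one synchronous update of every simulated "neuron":
        # weighted sum of inputs, then a squashing nonlinearity
        return np.tanh(weights @ activations)

    rng = np.random.default_rng(0)
    n = 1000                          # number of simulated neurons; scale at will
    w = rng.normal(size=(n, n)) / n   # random connection weights
    a = rng.normal(size=n)            # initial activations
    for _ in range(100):              # run a hundred update steps
        a = step(w, a)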

Now which part of the stones and the crane feels pain and anger if a loved one dies? And we are not looking for some stones signaling certain muscle activity or the production of tears; we are looking for the internal experience of pain. Based on my beliefs I seem to be forced to accept that those stones can somehow be conscious and feel emotions, even if it seems hopeless to understand how this works. But this also has a possibly even more disturbing consequence. If piles of stones can be conscious, what prevents other objects from being conscious? What about stars in galaxies? What does it feel like to be a galaxy?

> Software on the other hand... that is a bit harder to categorically dismiss. I think I can imagine software that produces an experience somewhat analogous to the human one.

I can not, no matter how hard I try. I can imagine software faking human experiences, saying it feels joy or pain, but I can not imagine it actually feeling it. Not least because I can not even really say what the difference is. It seems to me that once I could imagine this for software, it would only be a small step to imagine the same for a pile of stones. The difference between a human and some software seems enormously larger than the difference between some software and a pile of stones, at least to me.


If you compress all these trillions of years back into one second, you'll be able to talk to the simulated person and relate to its feelings; the stones will become a blur in the background and stop obscuring the view.

How can such systems have feelings? I don't know. Probably the same way a network of chemically/electrically activated neurons does.


Dennett claims that he himself is unconscious. Not someone I'd look to for advice on anything, really.


Really? That would be very surprising to me. Where did Dennett say this?


He claims that consciousness is an illusion; ergo, he is unconscious.



