
I am the OP, and you just about summed it up.

If I wish to ascertain the current state of the universe and the human ways of interacting with it, then any day, give me physics, math, brain scans, and controlled experimentation rather than long-winded twaddle from some dude believing his random personal ramblings to be somehow normative.




I prefer reinforcement learning to mind-body philosophy. RL has clearer, more concise concepts that can be tested, and it has the potential to create intelligence, something philosophy can't do. I replace the word "consciousness" with "agent", "emotion" with "value function", and "sensing" with "neural-network-based representation", and suddenly many mind-body debates become easier to grasp. "Reasoning" is just simulation of consequences in the mind, something called model-based RL; the babble of a baby looks like an LSTM learning to generate text, and the chaotic movements of a newborn look like pretraining an RL agent for motion control.
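
To make the model-based RL analogy concrete, here's a toy sketch (everything in it - the 1-D world, the hand-coded model and value function - is my own illustration, not any real library): the agent "reasons" by rolling out imagined futures with a world model and picking the first action whose imagined future scores best.

    # Toy sketch of "reasoning = model-based RL": simulate consequences
    # in a world model before acting. Agent on a 1-D line, goal at 10.
    GOAL = 10
    ACTIONS = (-1, +1)

    def world_model(state, action):
        # Stands in for a learned model of the world; hand-coded here.
        return state + action

    def value(state):
        # Value function in the "emotion" role: how good a state feels.
        return -abs(GOAL - state)

    def plan(state, depth=3, gamma=0.9):
        # Model-based "reasoning": imagine each action's future, score it.
        best_action, best_score = None, float("-inf")
        for first_action in ACTIONS:
            s = world_model(state, first_action)
            score, discount = value(s), gamma
            for _ in range(depth - 1):
                # Continue the imagined rollout greedily on the value function.
                s = world_model(s, max(ACTIONS, key=lambda a: value(world_model(s, a))))
                score += discount * value(s)
                discount *= gamma
            if score > best_score:
                best_action, best_score = first_action, score
        return best_action

    state = 0
    for _ in range(12):
        state = world_model(state, plan(state))  # act in the real world
    print("final state:", state)  # the imagined rollouts steer it to the goal

The rollouts never touch the real world until the chosen action is executed - that gap between imagined and executed consequences is all I mean by "reasoning" here.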

Then I visit philosophy forums and people still insist on "qualia this" and "free will that". Qualia are just sensations plus the action values associated with them; they feel like something because they occupy our perception and value judgement, and determine actions. So many fake mysteries in philosophy, created by fluffy suitcase words like consciousness.
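
In the same spirit, a toy rendering of the qualia claim (the names and numbers are mine, purely illustrative): a "sensation" is just an observation, and what it "feels like" is the action-value signature it evokes, which in turn determines what the agent does.

    # Toy rendering of "a quale = sensation + associated action value".
    # The Q table stands in for a learned action-value function.
    Q = {
        "smell_of_smoke": {"flee": 0.9, "ignore": -0.8},
        "taste_of_sugar": {"eat": 0.7, "ignore": -0.1},
    }

    def experience(sensation):
        values = Q[sensation]                 # the value judgement it evokes
        action = max(values, key=values.get)  # ...which determines action
        return action, values[action]

    print(experience("smell_of_smoke"))  # ('flee', 0.9)

On this view, what a sensation "is like" just is the bundle of values and action tendencies it triggers.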

Yeah, but why would sensation and a value function feel like anything? Why aren't we p-zombies, one might ask? Because we are agents in the world and not being good agents means death, so we have to feel to exist. RL and sister domains like evolutionary algorithms explain consciousness by its opposite - death. For example, what is consciousness if not that thing necessary for you to eat, protect yourself and reproduce - in other words, to beat death?

P-zombies don't fear death. Even if one existed, it would not have the drive to protect itself and it would be damaged or destroyed soon enough. So it's something that could only exist for a short time and not benefit from evolution or RL, similar to a traditional computer program. I wholeheartedly agree with a recent article that put the blame for the stagnating philosophy of consciousness on the shoulders of Chalmers and the (philosophically) useless concepts he produced. The "hard problem" is just dualism in disguise. Call something "a hard problem" and it becomes a category of its own, apart from science, in the realm of metaphysics - a euphemism for dualism.


Replacing terms doesn't make the problem go away, nor does providing a possible evolutionary reason. You're advocating a behaviorist approach, which Chalmers and everyone else have been aware of for half a century or more.

And you misunderstand the p-zombie argument. A p-zombie is functionally and behaviorally equivalent to us because it's physically identical. Of course it would seek to avoid death, as all life forms do.


The behaviorist approach is good enough to beat us at Go today. I'd say it's not the same approach philosophers rejected 50 years ago. On the other hand, the "hard problem" is just dualism in disguise. It declares something "hard", thus "special", and sets it apart from the physical world that can be studied and understood.


> Why aren't we p-zombies, one might ask? Because we are agents in the world and not being good agents means death, so we have to feel to exist.

> P-zombies don't fear death. Even if one existed, it would not have the drive to protect itself and it would be damaged or destroyed soon enough.

You missed the point, like many others who think they can 'explain away' consciousness in a 4-paragraph post on an internet forum.

What you described is a difference in external behavior (eating, death avoidance, reproduction, etc.). That is not what the problem of consciousness is about, and it's a dead giveaway that you don't understand the issue you're pontificating on.


External and internal are united - if you fail externally, you're dead 'internally' too. You try to separate consciousness from the external world and study it under a microscope (metaphorically), and that's wrong. The agent is part of the world; it can never be anything outside it, and it can never be understood on its own. By labeling my argument "behaviorism" and rejecting it because it appeals to the "external", you ignore the very source of the experiences that create consciousness.

Then we hear people searching for the "neural correlates of consciousness" as if it were a kind of brain secretion that just needs to be found - but it's the external world that correlates with consciousness, whether you're an AI agent or a brain. Every sensation, every reward, and the body itself come from the world; the structure of experiences encountered in the external world creates the contents of consciousness, and yet we search for its explanation only in our brains.




