What's an illusion? If by illusion you mean an abstraction, like how a TV picture is an illusion of a picture rather than actually being a picture, then I'm on board. If by illusion you mean "worth excluding from your map of how the world works," then I have a bone to pick with you.
My main problem with physicalism is that it doesn't handle abstraction well. I'm fine with monism over dualism but you need some kind of functionality with which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game of Life, and Lord of the Rings are all on the same plane of existence.
What draws me to Objective Idealism isn't so much the fact that it's compatible with religion but rather that 'mind stuff' is the best 'thing' that we can use to describe everything. The fact that it doesn't put severe emphasis on the physical as "better" than other modes is just a nice little bonus to annoy materialists with.
Yes! This bothered me as well, until I recently encountered Sean Carroll's philosophy of "Poetic Naturalism":
1. There are many ways of talking about the world.
2. All good ways of talking must be consistent with one another and with the world.
3. Our purposes in the moment determine the best way of talking.
One way of talking about the Game of Life simulation running in my other browser tab is as a bunch of electrons bouncing around in my computer's CPU. Another way of talking about it is as a cellular automaton obeying Conway's rules. And they're consistent with one another; e.g., if I stop the electrons by shutting down the computer, I expect the automaton to stop running.
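The automaton-level way of talking is fully self-contained: Conway's rules fit in a few lines with no mention of electrons at all. A minimal sketch (not tied to any particular hardware):

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Conway's rules: birth on exactly 3 neighbours, survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2, regardless of what substrate
# (CPU, GPU, pencil and paper) carries out the computation.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

Both descriptions, electrons and rules, predict the same blinker; that's the consistency requirement in axiom 2.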
In retrospect, it's pretty obvious. But it must not have been _too_ obvious, because it presents a viewpoint that isn't quite physicalism and isn't quite dualism, and people have been arguing back and forth about that for a long time.
Sean Carroll, The Big Picture
2 sounds dubious. There are good ways of talking about the world that are incompatible with one another (e.g. quantum mechanics and general relativity, but even more so, opposing viewpoints based not on objective disagreement but on value judgements).
First, "ways of talking" have domains of applicability. GR's domain of applicability doesn't include the very small scale, so it makes no predictions there that could contradict QM.
The book also talks about value judgements, and is very explicit that different people's core values may differ from each other. So I guess poetic naturalism doesn't apply to them? That bit wasn't clear, but I would only apply these axioms to "is", not "ought" statements.
We simply have no model for consciousness. We have absolutely no idea what it is.
It isn't even worth getting started with dualism vs materialism when both are - ultimately - constructs created inside, and possibly by, the thing/experience/whatever we're trying to describe.
One problem I have with illusionism is that if consciousness is an illusion, what is it that is being fooled by the illusion? Presumably the answer is that the illusion is fooling itself, which to me implies that either there is something there that is "real" to believe the illusion, or that the definition of an illusion in this case is so far from our usual definition that the term does not have much in the way of explanatory power.
The brain is a super information processing machine. It executes incredibly and unimaginably complex algorithms without our knowing about it. But one of the things it does is to construct simplified models of itself, and these simplified models are what we are "aware of." More precisely, these simplified models are what consciousness is.
The "illusion" is that consciousness is somehow in the driver's seat. It isn't at all. It is just a kind of post-hoc shorthand that the brain uses for itself. It's more like a log file than an executed piece of code.
My typing out these words didn't come from my "consciousness"; it came from the brain's incredibly complex algorithms. But my consciousness is taking the credit; it gives the abstract, executive-summary version of how my goal of expressing some point led to my word selection. All those "inner experiences" along the way are essentially notational.
The source of meaning is not the brain itself, but the game. The game between agent and environment, where the agent collects rewards and tries to reduce losses. All meanings stem from the game itself, meaning is not 'secreted' by brains in a jar.
The illusion objection is just an observation that any model, no matter how complex it is, is an approximation. The "ego" itself is a model of the agent, an approximation of the agent. But a useful one, a grounded approximation, that tracks reality as it changes. At the very least the model is good enough to make sure genes get into the next generation, otherwise there would be no more model to talk about. That's the source of meaning and consciousness - it's a loop, a bootstrapped meaning based on self replication.
I often see debates about consciousness ignoring the environment and the game (or life). That's a sad situation for philosophy, in my opinion. They got scared of behaviorism decades ago, and now ignore reinforcement learning. But RL is our best bet at cracking the mind-body problem.
I'd suggest it's impossible to be conscious without being conscious of something, whether that something is an external event, a physical sensation, an emotional state, a desire, a memory, a plan, or an abstract thought.
In practice consciousness flits between all of these things, like an inner cursor.
But that does nothing to explain the sensation of consciousness - the simultaneous experience of being both subject and object.
It's the gap between "Maybe this very metaphorical explanation helps" and the lived sensation that appears to be unbridgeable, and makes the hard problem so hard.
Qualia are an embedding of sensations into a "sensation space". Being aware of qualia is just evaluating sensations with respect to future actions, goals and desires - basically adding emotion on top of perception. Emotion emerges from the utility of the current state of your experience, utility is related to rewards, rewards are controlled by genetics, and genetics have a single goal - replication. That's how qualia bootstrap into reality - they support actions that protect the self-replication of genes, and genes support the development of a brain that can have qualia in the first place.
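Read as a computational sketch rather than metaphysics, the "embedding plus evaluation" framing can at least be made concrete. Every name and number below is an illustrative assumption, not part of any real theory:

```python
# Toy sketch: sensations are vectors in a made-up "sensation space",
# and their "valence" is a utility score over that space.
sensations = {
    "warmth":  [1.0, 0.2, 0.0, 0.1],
    "pain":    [-0.8, 0.9, 0.1, 0.0],
    "satiety": [0.6, -0.1, 0.7, 0.2],
}

# A crude "utility" direction standing in for reward-relevance, which
# on the view above would ultimately be set by genetics.
value_weights = [0.9, -1.0, 0.5, 0.0]

def valence(embedding):
    """Evaluate a sensation with respect to utility (a dot product)."""
    return sum(x * w for x, w in zip(embedding, value_weights))

# Ranking sensations by valence is the "emotion on top of perception" step.
ranked = sorted(sensations, key=lambda s: valence(sensations[s]))
assert ranked == ["pain", "warmth", "satiety"]
```

The interesting claim is in the weights, not the arithmetic: on this view they're tuned by evolution so that high-valence states tend to be replication-friendly ones.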
Chasing the Rainbow: The Non-conscious Nature of Being
I think and theorize on consciousness quite often, but this angle was new to me and kinda blew my mind. Thanks for sharing :)
An agent needs to model its environment in order to plan successful strategies. But when the environment contains other agents, it becomes necessary to model them too - thus, create representations that can predict future actions of those agents. When applied on the agent itself, these models create the "ego", a representation useful in predicting the agent's own future actions. All this is necessary in order to maximise rewards in the game (and by game, I mean life, for humans, and the task at hand for artificial agents).
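That last step, the self-model, can be sketched in a few lines. Everything here is a made-up illustration: a stand-in `policy` plays the role of the agent's real decision machinery, and `SelfModel` is the simplified representation the agent builds by watching its own actions:

```python
from collections import Counter, defaultdict

def policy(state):
    """The 'real' decision process (stands in for complex machinery)."""
    return "flee" if state == "threat" else "forage"

class SelfModel:
    """A simplified model the agent keeps of its own behaviour."""
    def __init__(self):
        self.seen = defaultdict(Counter)

    def observe(self, state, action):
        self.seen[state][action] += 1

    def predict(self, state):
        """Predict the agent's most frequent past action in this state."""
        counts = self.seen[state]
        return counts.most_common(1)[0][0] if counts else None

ego = SelfModel()
for state in ["calm", "threat", "calm", "calm", "threat"]:
    ego.observe(state, policy(state))

# The "ego" predicts the agent's own future actions without being
# the decision process itself.
assert ego.predict("threat") == "flee"
```

The point of the toy: the self-model is cheaper and coarser than the thing it models, which is exactly the "post-hoc shorthand" picture from upthread.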
If false==true, what is true?
Not at all, they are more like opposites of each other. Dualism in this context refers to the idea that there is something non-physical in addition to the physical body and brain that makes you conscious, a soul or something along that line. If you think that dualism is wrong and you are just a bunch of atoms governed by the laws of physics, then you probably think determinism is true, i.e. the choices people make are not really free choices but just the results of the initial conditions and the laws of physics (which may actually be non-deterministic, for example if the randomness of quantum physics is real, which makes the naming potentially a bit confusing).
It's hard to walk away from this determinism since we apply it to the rest of the world so often. It seems really suspicious to not also apply it to ourselves.
I’m also realizing based on a couple of responses that my thoughts on dualism probably fall outside what the majority of people consider dualism to mean. I.e. not just limited to the mind-body problem.
Thanks for the clarification :)
That's what I am - a self replicating, self adapting bunch of atoms that manages to keep being alive for a few decades. Not just any bunch of atoms. Any self replicating system has interesting properties not found in non-replicators.
Dualism AFAIK is believing that body and soul are two different entities. Atheism negates the soul so only the body exists, with the mind as a structure of brain activity.
Since the body is matter and subject to physics laws, some conclude that there can't be free will. But this is actually assuming a dualist vision, as if we existed outside the physical world.
In other words, causation easily flows into the brain, but not outward from it. Here's a link if you're curious: http://www.informationphilosopher.com/knowledge/emergent_det...
The brain has its own set of rules, only partially attributable to the rest of the world around it. It can create its own rules and operate 'under its own steam', in the same way that North Korea manages to operate largely autonomously of the world around it. It's certainly constrained by geopolitics, but its internal behavior manages to remain free regardless.
I used to want to reach for quantum mechanics to explain the brain, now I'm happy with "plain old" electromagnetism.
Unity <-> Duality
or Unity <-> Separation
or No-THING-ness <-> Things
or the whole <-> the parts
Lately I've actually come up with what I feel is an even more general pattern. And because it's 1am, I'll ramble a bit. :)
Typically we like to say things like:
Something <-> Nothing
True <-> False
I'm thinking a better pattern may be:
Something <-> Not Something
True <-> Not True
Which yields the pattern "A <-> Not A"
The next step is that Everything follows such a relationship, such that you could conclude the Absolute duality of "Absolute/All <-> Not Absolute/All"
An interpretation of this may be that: The universe must exist absolutely out of logical necessity as absolutely not existing requires the definition of absolute existence.
This means past, present, future, all states, all things, all possibilities, exist. There was no beginning and no end; everything just "is".
This leaves many problems and many things to resolve, such as: "if the universe just 'is', how is it that I feel the sense of happeningness?". But hey, "Happen <-> Not Happen". They both must be reconciled. A metaphor to reconcile this is to take the static state of all knowledge in your brain as the totality of your brain states. Dreaming, or viewing these different brain states in different orders, yields visions where things are happening, even though at rest each piece of knowledge/memory is just a static piece of information.
There are many other problems to reconcile, but this is my current line of thought. After spending my whole life fighting battles that emerge with hierarchical, cause-effect logic, I had to go off-roading a bit.
Of course, no matter how far we take it, you can, with a bit of oversimplification, sum your brain up to neurons being "connected or not connected" as the base of everything you know. Which means the limit of the description of the logic of the universe is comically limited by the structure of your brain. I.e. it seems that we can never self-confirm beyond this play on words. I call this the cosmic joke.
Good night. Please forgive my poorly formed, probably not scientific, and brief midnight thoughts :)
Maybe the link between physicality and information is consciousness. Only consciousness provides a single-player universe -- and thus provides the mechanism for the axiom of choice... grabbing the totality of experience as a piece of data. Without internal experience, we must resort to AC and somehow look at a living creature's external experience in order to derive the real number their life computes... nonsense... it is not sound. So consciousness really does serve as some sort of bridge. The totality of all consciousness provides the same complexity as the totality of the 3rd-person objective universe from start to finish. We have two data sets that encode the same thing. Linked by consciousness. What is the rub?
I remember one particular thought of how "real" the mind space actually is. We tend to think of the perceived reality as very physical, when in fact the mind space is able to simulate the same feeling of physicality. It really makes one question if the difference between mind and physicality is even worth distinguishing.
I often describe consciousness/me/you/entities/etc as dynamic lenses of the universe. Something (like information) goes in from all directions and the lens transforms that into a reaction. I think you could actually extend that view to entities like atoms. Hm, every point is a lens in itself?
The failure many see in physicalism is that the physical sciences can't (yet, if ever) tell us how there's a qualitative nature to our experience — that it feels like something to experience at all. How the eye turns photons into nerve signals, and increasing amounts about how the brain processes those signals into content, sure. What it's like to experience color, or even how there is such a thing as experiencing color and not merely electrochemical signals traversing myelinated tubes, not so much.
Consider, however: all of the content of your experience could somehow be false (think of how true and vivid a dream can feel, for example), but the simple, content-irrespective fact that you are having an experience can't be faked, can it? Wouldn't you still have an experience of having a faked experience? This is variously called an illusion, an emergent phenomenon, the "hard problem of consciousness", and many other things, with varying degrees of politeness.
Objective Idealism posits instead that this very experiential-ness itself is the fundamental "stuff" of reality, and that all of the content of what you experience (specifically including the experience you have of being an individual, distinct "you", and even the phenomena that we call "matter" and "energy" themselves) somehow supervenes on it.
For now, the question isn't falsifiable, so it's often also dismissed as meaningless — not least by people who subscribe to physicalism and don't want to play any more, when this line of inquiry comes up...
If we reframe to say that the atoms and forces of the chair are made of the same stuff as our thoughts, then that's way more in-line with how humans actually do things.
How do you distinguish if you are making people mad because you are making bad arguments vs. making people mad because you are piercing the veil of their tribal biases?
(1) - (And therefore, any mental states which people have experienced resulting from the "god-shaped hole" are clearly within the range of normal behavior in the evolutionary heritage of Homo sapiens. Therefore, the feelings associated with religious faith and hope are merely the birthright of human beings, whether or not one believes in a particular deity or is a materialist. I cite myself as a datum.)
"God-shaped hole" is a bit silly, the primary purpose of spirituality is to allay fears of uncertainty. Materialism allays those fears the same way that belief does. Any position on that spectrum is as valid as any other.
Spirituality, the reckoning with of the profoundly unknowable, is equally doable by both materialists and by believers, and I often see people who proclaim themselves as uber-materialist and hyper-critical of religious idiocy, go whole-hog with their own forms of myth, legend, priesthood, and dogma.
If materialists read the Bible with the same reverence that they read Shakespeare with, they'd see why their positions are so silly. David and Goliath is a phenomenal read on the dynamics of ego and courage. If Shakespeare had been born in that day, he'd have been on the committee writing the Bible.
It's just that the Bible is so old that you need to actually understand history to really grasp it. Of course, there's an entire industry of people devoted to making it accessible. Religion deals with far, far, far more than just the stuff of belief.
My position on materialism vs everything else is that since all positions on the spectrum are equally likely to be true, and there are far more positions on the spectrum that aren't materialism than that are, then materialism is vanishingly unlikely. Reductionist scientific reasoning is a poor tool to parse the metaphysical landscape with. More realms than just the physical can be conceived of without even having to consider the supernatural. These realms need to be unified somehow if we want to avoid the illogic of dualism.
Sorry, I agree with much of what you write above, but this in particular strikes me as a non-sequitur. There is a nihilism which can come from the certainty that a human is but an insignificant blip and from the certainty that the sun will burn the Earth into a dried out rock, if not de-orbit the planet completely. The "God shaped hole" is often deemed silly by the materialist tribes. Why, in particular? I think it's because it's so often used in silly bumper-sticker rhetoric. The existence of such needs -- which can go beyond even the question of life, death, and a (fictional) life after death -- is something which human beings, materialist or otherwise, should acknowledge and examine, or ignore at their peril.
> Religion deals with far, far, far more than just the stuff of belief.
Indeed. And so does the "God-shaped hole." I think belief is where many of both the religious and the materialists get themselves stuck, unproductively. I'm confident that there is no afterlife. However, given the arc of history, and the utter unpredictability of today's world to our ancestors of a thousand years ago, even someone who understands Thermodynamics has reason for unspecified hope.
But it doesn't take much to reach a conceptualization of God. I believe in a universe that's bigger than the physical aspects of it that we can see and experience. Given that even in our own universe, self-organization and sentience were able to evolve, there's no reason not to believe that sometime before our universe existed, a powerful being evolved to guide existence. When time frames literally stretch out to infinity, anything's possible.
Our universe could be just one in a countless procession of similar universes. The idea's even taken seriously by the scientific establishment. With this groundwork laid, over unimaginable numbers of universal iterations, there's more than enough room for God.
Here's him explaining why the problem is hard and how it could be approached, in the middle of some kind of artificial jungle: https://youtu.be/Vl8J3K_ZLkg?t=5m50s
It is clear to me that, whatever it is we're talking about when we're talking about consciousness, an expander graph doesn't have it.
On the upside, it explains why the cerebellum, despite comprising half the neurons of the brain, has virtually no impact on awareness when removed (like for tumors or epilepsy). The IIT answer is that the cerebellum is highly regular, like a GPU having many units, but all doing the same thing. In this sense, it has lower phi than the cerebrum, which is way more heterogeneously organized. This might also explain why awareness is lost in deep sleep or epileptic seizures; the theory is that the electrical pattern becomes much simpler, and lower phi.
The downside is that it's not clear where the dividing line between conscious/unconscious should be. A planarian only has ~8k neurons; is its phi sufficient for consciousness, or is it a biological robot? Or put it the other way: the phi of things like the internet or a biosphere could be quite high, but are they conscious?
As my advisor liked to joke, "What's the phi of the population of China?"
Small, because if you cut it in 100, you still get 100 functioning parts. Can't cut the brain in 100 and still get functioning mini-brains.
Don't get me wrong, IIT is one of the best mathematical models of consciousness out there, but I don't think it's the final word in the matter.
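The cut-it-in-100 intuition can be made concrete with a toy computation. This is emphatically not real IIT phi (which minimizes over all partitions of a probabilistic system); it only shows that severing cross-links changes an integrated network's behaviour while leaving a modular one, like the hypothetical population-of-China case, untouched:

```python
import numpy as np

def run(weights, state, steps=5):
    """Iterate a simple threshold network for a few steps."""
    for _ in range(steps):
        state = (weights @ state > 0).astype(int)
    return state

def cut_effect(weights, state):
    """Number of nodes that end up differing once the halves are severed."""
    n = len(state)
    cut = weights.copy()
    cut[: n // 2, n // 2 :] = 0  # no input from the second half to the first
    cut[n // 2 :, : n // 2] = 0  # and vice versa
    return int(np.sum(run(weights, state) != run(cut, state)))

state = np.array([1, 0, 1, 0])

# Modular: two independent 2-node loops, so the cut changes nothing.
modular = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
assert cut_effect(modular, state) == 0

# Integrated: every node listens across the cut, so behaviour diverges.
integrated = np.array([[0, 0, 1, 1],
                       [0, 0, 1, -1],
                       [1, -1, 0, 0],
                       [1, 1, 0, 0]])
assert cut_effect(integrated, state) > 0
```

On this crude measure the cerebellum-as-GPU story comes out the same way: lots of units, but cuttable into independently functioning parts.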
Isn't the cortex also the same unit (the cortical column) repeated over and over again?
After I understood the RL paradigm, I realised that Tononi's explanation barely scratches the surface. Yes, there is integrated information, but how does it come about? What is its purpose?
The answer is simple - painfully simple - the goal is to maximise rewards. One goal we all have is to live and have children - and this root goal (a necessity of the genes to propagate, actually) is what guides the evolution of integrated information in the brain. But the environment plays a crucial part in the contents, structure and complexity of consciousness. Integrated information is very dependent on the environment. Yet Tononi & co. still search for it in the brain, as if you can speak of a brain without considering its experiences, and consider experiences without thinking about the world and the problems the agent has to solve.
Just watching reinforcement learning agents learn and evolve in simulated environments, as we have had the opportunity to for the last 3-4 years, is enough to create a perspective on agents that is not human-centric and that is very useful in thinking more clearly about consciousness. You can see a humanoid learn a gait straight out of the Ministry of Silly Walks, you can see bots playing FPS games, AlphaGo playing against itself, cars driving themselves... That puts human learning and human agenthood in perspective.
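For readers who haven't seen it written out, the RL loop being invoked here is tiny. A minimal tabular Q-learning sketch on a 4-state chain where only the rightmost state pays a reward (the environment and every parameter are arbitrary toy choices):

```python
import random

random.seed(0)
N_STATES, GOAL = 4, 3
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

for _ in range(500):  # episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy choice between stepping left (-1) and right (+1).
        if random.random() < EPSILON:
            a = random.choice((-1, +1))
        else:
            a = max((-1, +1), key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward reward plus discounted value.
        q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, -1)], q[(s2, +1)]) - q[(s, a)])
        s = s2

# After training, the learned values point right in every non-goal state.
assert all(q[(s, +1)] > q[(s, -1)] for s in range(GOAL))
```

Everything the comment leans on, agent, environment, reward, learned value, is present even at this scale; whether any of it says something about consciousness is exactly what the rest of the thread disputes.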
And Chalmers is a dualist that believes there are two realms that can't be explained, and that's ridiculous in this day and age. He's the worst philosopher of consciousness because he led a generation astray with sterile dualist concepts - and where has he led philosophy? Nowhere, there was no insight, no discovery after the "hard problem" because, darn, it's "hard" which is another word for dualism today.
I take Tononi and Dennett over Chalmers any day, but I prefer Reinforcement Learning over all of them as my intuition pump with regard to consciousness. Philosophy is mired in a swamp of bad concepts that are almost useless, they should just use a learning based terminology which is so much more effective. Engineers and experimental scientists create bots that beat humans at Go, a game that can't be brute forced, and they don't realise they've been outrun in their 2000 year marathon by a hundred year old concrete approach. The difference is that RL has the right concepts and philosophy uses extremely refined but ultimately useless concepts. They've realised words don't mean anything in the absolute sense (they all rely on each other, cyclic referential) and are just part of a game, but are still neck deep in useless words instead of using evolutionary and RL concepts to concretely model consciousness and the game it plays.
That's a full-on nihilistic postmodernism. The fact that words mean something only in reference to other words doesn't have to mean that they are useless. Quine and other pragmatists (Buddhism does the same) argued otherwise - that concepts/theories derive meaning or truth-value from how useful they are in the real world (as a network, rather than individually).
Treating all philosophers as one camp versus science is mistaken. Whatever any particular scientist or engineer says, there will always be some philosophical assumptions behind it. It's always better to make them explicit rather than be in the dark. The best scientists in history were pretty deep in philosophy as well.
Eg. Tononi is both a philosopher and a scientist. He's clearly on Chalmers' side philosophically; he perceives consciousness as something fundamental, MUCH more fundamental than learning. He posits that even stable systems (so with no learning at all) can be conscious. Which makes a lot of sense from a phenomenological point of view.
He also adds a theory of how specifically consciousness may be related causally with the physical world. That's the scientific part.
Silver, on the other hand, and the whole RL thing are not concerned with consciousness AT ALL! It's a completely different problem. Actually it may be the case that most of the learning processes in the human mind are unconscious!
"There is no consciousness in itself, just consciousness of something. Learning is what ties together agent and environment..."
Exactly - if you define learning as a relationship between a system and its environment, you don't need anything else (like consciousness), just the actual and potential interactions.
Late Wittgenstein, Heidegger, Merleau-Ponty and others would be on the same page with you here, so again, let's not throw the baby of philosophy out with the bathwater. These observations were made in the first half of the 20th century. They apply perfectly to the naivete of old-school symbolic AI (and the logical positivist philosophical stance behind it), as captured by Hubert Dreyfus, who described all the problems with it from a philosophical (specifically phenomenological) standpoint in "What Computers Can't Do" and his more recent paper ( http://cspeech.ucd.ie/Fred/docs/WhyHeideggerianAIFailed.pdf ). RL seems to be a step in the right direction in this perspective. However...
"[Learning] ... it's the building force of consciousness."
Well, this part just doesn't make sense. You want to focus on explaining learning? Fine. Do some work on RL, it's enlightening for sure. I completely agree that it's fascinating how new concepts emerge in AlphaGo around some specific board configurations. It changed people's understanding of the game. But please, don't conflate it with consciousness. And if you do, be open about it and name your position in terms of Chalmers' recent paper. Is it some form of illusionism? Only then can we have a meaningful conversation about your actual position on what consciousness is.
Whatever is the relationship of concepts and sensations, however these two aggregates relate to each other and evolve in the mind, consciousness seems to be something more fundamental. Are you saying that AlphaGo is already conscious? If not, can it be made conscious? How? By adding more CPU? A webcam? We can't escape these questions.
Humans don't really have that much "reflection", in the sense that we use the term in programming. We can't see our library of reflexes. We can't see what early vision is doing. We can't look at the rationale behind our own classifiers. We can't look at how our memory is indexed. Trying to understand the mind by introspection is thus inherently futile.
How do we know we've created something with consciousness, and not just a very sophisticated program? The philosophy you disparage already has a term for this: the "philosophical zombie". For all intents and purposes, they appear human, but they have no internal experience whatsoever. All you'll have done is shunt the problem downstream.
Also, you're wrong about early vision. That's the best studied part of the brain, and in fact, researchers have applied ML techniques to fMRI data and extracted out the images being shown.
-- How do we know other humans aren't "very sophisticated programs"? Why do we care? Which is to say, the whole "omg they look and act the same but are so, sooo different" of philosophical-zombie talk and similar, uh, rot, is kind of leveraging the person's intuitive attachment to the idea of a soul, a magic thing with no measurable properties that makes us oh-so-different from a crude material object.
If you read the GP in context, it doesn't say neurologists haven't studied consciousness, it says people have no self-awareness of the preprocessing that goes into creating vision.
We don't. Most people assume that other people have consciousness, with various justifications, but no one can know that.
>a magic thing with no measurable properties that makes us oh-so-different than a crude material object.
But it does make a difference morally speaking. If we consider a Utilitarian model of morality, then what is good or bad is that which causes the experiences of happiness or suffering. But if a philosophical zombie does not experience pleasure or suffering, then there is no moral consequence to harming them, any more than there is moral consequence to watching someone die in a movie, or to splitting a rock in half.
And this becomes even more worrying when we start to think about AI. If an AI can develop consciousness as a result of our actions, then perhaps it can also experience suffering as a result of our actions.
Of course, since we cannot identify p-zombies, possibly by definition, one can argue that there's not much point thinking about it. But, well, you could say that about a lot of philosophy. Personally, I think it's valuable to think about these things as an intellectual exercise, even if we can't really conclude anything from it.
No, qualia/awareness is not magical, it's prima facie undeniable, we are aware of things, and we have an experience of the world. In fact, consciousness is more fundamental than everything else we know about the world, because all else we know is known by awareness. That scientific paper you read? You were aware of it. Bacteria under the microscope? You had an experience of seeing.
It's the very definition of the hard problem. How does biology lead to awareness? We have strong evidence they're related (sleep, scotomas, mental disease, drugs, etc etc etc), but we have yet to pinpoint the connection.
As for the philosophical zombie being "rot", consciousness neuroscientists have already debated whether it is actually possible, and if so, detectable. We already know about a phenomenon called blindsight for vision, where you can perform visual tasks but have no experience of seeing; how do we know there's not some frontal lesion that could wipe out or reduce your awareness, but leave your other faculties mostly intact?
OK, "magic" is an adjective added for flavor; "no measurable properties" is the key term. The most obvious point is the "philosophical zombie", which is specifically posited to have no measurable differences from a non-zombie.
It doesn't seem particularly strange that a "thinking being" would be receiving and processing data. Human language can be used to reference internal experience that is assumed to be common to all humans (as your construct above shows). But these references can't directly be to something "primal", since they're representations.
Essentially, this style of mysticism references internal experiences, with individuals emotionally valuing their internal experiences and wanting them to be irreducible to mere matter. And once the mystical concept goes out in the world, it wants to clothe itself in the appearance of science - first philosophical zombies have no identity except "the same on the outside, identical on the inside", and then opportunists wanting to leverage this canard take out their microscopes and imagine measuring to regain some respectability. Yet, let us posit the unmeasurable and then try to measure it. I assume the church of the Giant Spaghetti Monster has a rite for doing just this.
So we'd be dealing with a situation where we have something that is behaving in a human like manner using mechanisms that are (to a first approximation) how humans do it.
Claiming that that program isn't conscious is close to defending the idea that the sun orbits the earth - the argument is 'even though we have accurate measurements of everything and a well-verified model, the implications are discomforting; hence the model must be wrong'.
There are people who can't handle the idea that consciousness is rooted in physical processes that we already have a handle on.
But what you're really asking is, would we experience mirth or humor if stimulated. And we know that's true, at least for memories. Certain hippocampal stimulations elicit associated memories.
The problem is not whether biology is related to consciousness (it is), but how?
Our ability to talk about consciousness doesn’t prove much either, because it turns out we can’t explain the idea in words. Trying to explain consciousness comes down to statements like: it’s the difference between seeing and !!!seeing!!! We can only communicate its true nature by way of alluding to the other’s experience of the same, not by direct explanation.
I don’t think it’s helpful to bring morality into it. Depending on your moral views, there might be good reason to treat things that we think are conscious as if they are conscious. That’s arguably what we do with other people. It doesn’t speak to the underlying questions, though.
While I’m suggesting above that conscious experience itself is probably irreducible and uncommunicable, that doesn’t mean that we can’t understand its causes and effects. It seems highly plausible that consciousness plays some functional role in the human brain, the understanding of which could be useful for medicine and AI.
Finally, I find plenty of utility in the joy and wonder of trying to understand this universe. The existence of consciousness as one kind of phenomenon or another has profound implications for our understanding of it, disproving some hypotheses and suggesting new ones.
Is illusion real because we experience it?
> It seems highly plausible that consciousness plays some functional role in the human brain, the understanding of which could be useful for medicine and AI.
Evolution produces things which aren't "useful" all the time. Examples: That dimple above your upper lip. The blind spot. The human coccyx...
> The existence of consciousness as one kind of phenomenon or another has profound implications for our understanding of it, disproving some hypotheses and suggesting new ones.
Consciousness is the experiencing.
If one deeply introspects about the nature of one's experience of experience, one may come to realize there's a certain fragmentary nature to consciousness.
Have you ever had episodes of behavior, for which you have no memory? "Blackouts?" I have. What if consciousness is simply an interesting epiphenomenon, unnecessary for thought and decision making? If all we have are people's reports of it, why should we accord any more epistemological significance to it than we do to religious experiences?
As for the illusion question, it’s missing the point.
It doesn’t make sense to call it an illusion. Your conscious experience may accurately reflect reality, but that doesn’t change the fact that you are experiencing the phenomenon of consciousness. That our consciousness is more fragmented than we at first think is similarly irrelevant to the hard problem; fragmented or not, the subjective experience lacks explanation.
It’s entirely possible that consciousness is unnecessary for thought. I strongly doubt that it is totally useless, but that’s certainly possible too.
Even if you believe it is probably useless, we investigate things of dubious utility all the time, often discovering unforeseen uses along the way.
So the question remains: why are you so eager to dismiss the defining feature of the human experience?
Am I? If consciousness were a phenomenon no one had a memory of, would we even know it to exist? We certainly wouldn't be having this discussion. I can imagine a consciousness without memory, but I can also imagine a fantasy unicorn in real life. This doesn't mean either exists.
> As for the illusion question, it’s missing the point.
I don't think so. I think it's very significant that there's little difference between the consciousness of real sensory information and consciousness of illusion.
So you have no proof for your assertion, only a suspicion. Perhaps a well-founded one; I could even grant that.
Absolutely. We have awareness in the current moment. Memory is not required. In fact, if you think about what memory is, it's a current, internal experience that we associate with a concept called the past. All memories are actually experienced in the present, making awareness more fundamental than memory.
Without memory, the contents of awareness would be very different, sure, but it's not a prerequisite for awareness itself.
Since when? (Not just a joke. Also a serious question.)
> In fact, if you think about what memory is, it's a current, internal experience that we associate with a concept called the past.
Uh, no. I can remember what I had for dinner last night without having a flashback where I re-experience last night as in some kind of dream.
How do you know? There are people who can't move short term memories into long term storage, but they can remember enough to be able to play Tic-Tac-Toe, for example.
Do we have any verifiable examples of consciousness where there is absolutely no memory? I very much doubt it.
Just because a memory is not as vivid as sight or a dream, doesn't mean it's not currently in consciousness. Think about it, when you're remembering, you're aware of something: the memory. It could be a fact you're aware of, or a diminished sensory experience playing in the mind's eye, but you're still aware of it.
> How do you know? There are people who can't move short term memories into long term storage, but they can remember enough to be able to play Tic-Tac-Toe, for example.
That example is about memory and behavior, not awareness. A complete anterograde and retrograde amnesic would still have an experience of seeing (some odd shapes) and emotion (being confused, stressed, etc). And in fact, this is what we think the consciousness of pre-long-term-memory infants is. To quote William James: "blooming, buzzing confusion".
> Do we have any verifiable examples of consciousness where there is absolutely no memory? I very much doubt it.
That's true only because 99.99999999999999% of humans have no memory deficits. It's correlated but there's no causal relationship needed. The only thing an amnesic can't be aware of is a memory they can't access or that was never formed.
Being a professional philosopher writing for other philosophers, his writings are very analytical and thorough, so reading and following them is hard work.
I think we should try to create intelligent AI agents in order to understand what consciousness is, and reconsider behaviourism and scientific approaches, as opposed to this kind of sterile dualism.
The hard problem of consciousness is being conscious, which involves a lot more than awareness or self-consciousness.
To me, I'm the only consciousness in sight. Everyone else is just objects. If I share an experience, I incorporate that as my own from my own perspective. There is little need to tell other people that they are conscious. This is rather a result of social dynamics, which I am alas not very conscious about.
The tipping point is if I perceive myself, my body, my actions (rather, their results) as objects. That means I was not aware of those objects, didn't anticipate them, so I must have been rather unconscious. This is in a sense a loss of identity (in the sense of in+dent, interleaving, interlocking, etc.). The goal of consciousness is to redirect the subconsciousness, to avoid unconsciousness and uncertainty, simply because of pain and fear and painful fear, so there is an effective feedback loop. There is an incentive against active thought: it's a huge energetic burden. Obviously it has its uses, but we can think passively, kinda -- e.g. to remember something long after having actively thought about it, or to make a default decision to direct attention towards something. So there is a real incentive against consciousness. I can only guess that opponents are actually getting a headache from thinking on too many meta levels, and energy restraints typically lead to anger, which might move them into a bad light, where it's hard to distinguish between hyperbole and hypothesis.
At that, there is no distinction "from the other problems of consciousness" insofar as discussion is concerned, because talk is conscious. So Dennett pulls a short circuit and claims that all talk about consciousness is inherently conscious, whereas on the subconscious level actions speak louder than words, and he provocatively takes the opportunity to verbalize such a subconscious action (refusal, opposition, repulsion, feeling insulted, fatigued, or ...) -- still to further the discussion, out of a subconscious desire, professional or habitual.
This is really cute, because Dennett assumes the position of the antagonist, giving an adversarial example to learn from, implying that not all of his arguments are wrong, probably hoping that the further discussion will converge on a language he can support. Ultimately, if he says that there is no problem, he implicitly admits that Chalmers already solved it. And if he says that there is no distinction, he implies that the conscious cannot be treated in isolation from the subconscious.
So he is naturally admitting that his research is not perfect, i.e. not done.
The problem is that the working of the brain is certainly nonlinear, so there can be no clear line of separation. It's a process, not a state, and so he keeps processing the problem, maintaining the illusion of progress.
Nihilism is the ground state of philosophy. He doesn't fall back on it, he was already there, and he shows the whole field that they haven't come very far, philosophically -- which is not bad, because constancy is the psychological end-goal. From the natural sciences, physics reinforces subjectivity as a necessary epistemology, while neuroscience is the subject moving the fastest, figuratively speaking.
I would usually not post such a long reply, but I feel your insult was out of line and the topic is generally interesting enough. Yeah, I'm rambling.
In all seriousness I wonder if Dennett hates consciousness (the “hard” kind) because it threatens his worldview. He seems like the sort of person who finds it impossible to say “I don’t know.”
I don't think consciousness is being invoked to "explain" any of the things you list. The issue to explain is why we observe consciousness existing or accompanying these things in the first place (for ourselves). For example, one can imagine a system capable of referencing itself, choosing actions based on that, etc, that isn't conscious. That's a philosophical zombie. So the question is why aren't we all philosophical zombies.
The clearest example of the meta-problem is Daniel Dennett, another prominent philosopher, who not only doesn't agree that the problem is hard, but also insists that consciousness itself is an illusion, so there's nothing mysterious to explain in the first place. Quite a mind-boggling statement to most people, including HNers, as far as I remember from other threads on the subject.
I'm not familiar with either author, but this sounds so wrong that I wonder if you misrepresented it, because after a slight modification I would agree: there's no explanation in the end. The misrepresentation is trivial, because the consequence is effectively the same. There are two sides of the same coin: we need to refine the model, and we need to skip it to get to the meat.
No that’s exactly what Dennett argues, and that’s why his position is so maddening and infuriating to people who do think there’s a hard problem.
It is like asking about the nature of an apple and being told that there is no apple. Then throwing the apple at their head, only to have the person continue insisting that the apple is a figment of your imagination.
Now you're getting to the realm of torture, pain, and horror. I think that most people can be quickly driven to the point of admitting the reality of the consciousness of pain and horror. This isn't an experiment that would easily get past the ethics board, however.
"We don’t have an objective measure of consciousness. But we can recognize three levels of learning that apply to our brains, and how those create an information processing system that integrates data into a first-person perspective. This is how the brain is also a mind with subjective meaning and subjective experiences.
The hard problem of consciousness is that we must rely on our intuitions to judge if such a system is conscious. At the same time, it is highly likely that most systems processing information in similar ways are conscious, whether running on a brain or on a computer."
From the outside we could observe how information is flowing, what connects to what, how parts work, and how they work together. But we could never observe the actual feeling of being you.
It's like seeing a river flowing: you can measure and describe all kinds of aspects of it. But to get wet, to feel the cold water, to feel the force with which it flows, you have to step into it. Only this river flows in virtual reality, your mind's reality, and you cannot step into it.
Intelligence is hard to measure, but I can objectively say what is and what isn't intelligent. It is not the same kind of subjectivity.
And in general, your table idea only works for systems that can be in a limited number of states. You cannot do it for an intelligence test, especially not if I get to redesign the test after you finish your table; same for consciousness.
Thanks, I guess that's exactly the point I wanted to make, but couldn't. Therefore there cannot be an objective measure of consciousness itself, as for objectivity you need more than one observer. Of course you can measure properties we think are associated with consciousness.
> Intelligence is hard to measure, but I can objectively say what is and what isn't intelligent. It is not the same kind of subjectivity.
Via IQ tests?
> And in general, your table idea, that only works for systems that can be in a limited amount of states. You cannot do it for an intelligence test, especially not if I get to redesign the test after you finished your table, same for consciousness.
You're right, you could beat my table by designing a test which is not tabulated. But when I use a neural network instead of a table, I might be able to score a high IQ even though the network has never seen the test you designed, as shown here: https://arxiv.org/abs/1710.01692. I wouldn't call such a network intelligent.
Just the fact that you recognize an IQ test as such, allows me to mark you as intelligent. Very objective. The actual score, sure that is much more subjective.
> when I use a neural network instead of a table
That really depends on the degrees of freedom, which are unlimited for IQ tests. First, I could change things that have nothing to do with the test itself, like reversing the A, B, C, D order, or putting the options on the left. Or I could simply devise a never-before-seen kind of test, instead of varying only the geometric shapes.
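The lookup-table idea discussed above can be made concrete with a minimal sketch (the questions and the agent name are hypothetical, just for illustration): an agent that answers only from pre-tabulated question/answer pairs covers a limited set of states, so any test it has never seen defeats it.

```python
# A hypothetical "intelligence" implemented as a lookup table: it can only
# answer questions that were tabulated in advance.
TABLE = {
    "next in 1, 2, 3, ?": "4",
    "odd one out: cat, dog, car": "car",
}

def table_agent(question: str) -> str:
    # Return the memorized answer, or give up on anything outside the table.
    return TABLE.get(question, "no answer")

print(table_agent("next in 1, 2, 3, ?"))    # tabulated question: answers "4"
print(table_agent("next in 5, 10, 15, ?"))  # novel question: "no answer"
```

A redesigned test (any question not already a key in the table) always falls through to "no answer", which is the point being made: the table only works while the space of possible tests stays fixed and finite.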