
The Meta-Problem of Consciousness [pdf] - lainon
https://philpapers.org/archive/CHATMO-32.pdf
======
vinceguidry
> This strategy typically involves what Keith Frankish has called illusionism
> about consciousness: the view that consciousness is or involves a sort of
> introspective illusion.

What's an illusion? If by illusion you mean an abstraction, like how a TV
picture is an illusion of a picture rather than actually being a picture, then
I'm on board. If by illusion you mean "worth excluding from your map of how
the world works," then I have a bone to pick with you.

My main problem with physicalism is that it doesn't handle abstraction well.
I'm fine with monism over dualism but you need some kind of functionality with
which to consider different kinds of 'stuff'. Otherwise a rock, Conway's Game
of Life, and _Lord of the Rings_ are all on the same plane of existence.

What draws me to Objective Idealism isn't so much the fact that it's
compatible with religion but rather that 'mind stuff' is the best 'thing' that
we can use to describe _everything_. The fact that it doesn't put severe
emphasis on the physical as "better" than other modes is just a nice little
bonus to annoy materialists with.

~~~
chimprich
> What's an illusion?

One problem I have with illusionism is that if consciousness is an illusion,
what is it that is being fooled by the illusion? Presumably the answer is that
the illusion is fooling itself, which to me implies that either there is
something there that is "real" to believe the illusion, or that the definition
of an illusion in this case is so far from our usual definition that the term
does not have much in the way of explanatory power.

~~~
tpm
Reading your words, this article came to mind; I think it's interesting in
itself and may provide a hint at the answer.

Chasing the Rainbow: The Non-conscious Nature of Being

[https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...](https://www.frontiersin.org/articles/10.3389/fpsyg.2017.01924/full)

~~~
patcon
> Though it is an end-product created by non-conscious executive systems, the
> personal narrative serves the powerful evolutionary function of enabling
> individuals to communicate (externally broadcast) the contents of internal
> broadcasting. This in turn allows recipients to generate potentially
> adaptive strategies, such as predicting the behavior of others and underlies
> the development of social and cultural structures, that promote species
> survival. Consequently, it is the capacity to communicate to others the
> contents of the personal narrative that confers an evolutionary
> advantage—not the experience of consciousness (personal awareness) itself.

I think and theorize on consciousness quite often, but this angle was new to
me and kinda blew my mind. Thanks for sharing :)

~~~
visarga
In AI there have been experiments where agents need to communicate and
cooperate in order to solve tasks. As a result, they developed a kind of
"language". It's just what happens when cooperation has an evolutionary advantage.

An agent needs to model its environment in order to plan successful
strategies. But when the environment contains other agents, it becomes
necessary to model them too - thus, create representations that can predict
future actions of those agents. When applied to the agent itself, these models
create the "ego", a representation useful in predicting the agent's own future
actions. All this is necessary in order to maximise rewards in the game (and
by game, I mean life, for humans, and the task at hand for artificial agents).
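The point about reusing other-agent models on oneself can be sketched as a toy (the actions and histories below are entirely made up, and the "model" is just a frequency count, far simpler than anything a real multi-agent system would learn):

```python
from collections import Counter

def predict_next(history):
    # Crude predictive model: expect the action the agent has taken most often.
    return Counter(history).most_common(1)[0][0]

# A record of another agent's past actions, used to anticipate its next move...
other_history = ["share", "share", "hoard", "share"]
print(predict_next(other_history))  # share

# ...and the same machinery turned on the agent's own record: a crude "ego",
# a representation useful for predicting one's own future actions.
my_history = ["hoard", "hoard", "share"]
print(predict_next(my_history))     # hoard
```

The design point is only that nothing new is needed for the self-model: the identical predictor runs on a different history.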

------
aaimnr
Giulio Tononi is the guy who's arguably brought the most interesting
perspective on consciousness (integrated information theory) since Chalmers'
original problem statement.

Here's him explaining why the problem is hard and how it could be approached,
in the middle of some kind of artificial jungle:
[https://youtu.be/Vl8J3K_ZLkg?t=5m50s](https://youtu.be/Vl8J3K_ZLkg?t=5m50s)

~~~
KingMob
Former consciousness neuroscientist here. IIT and Tononi's phi measure have
some great explanatory power, but it's not clear they're sufficient.

On the upside, it explains why the cerebellum, despite containing the majority
of the brain's neurons, has virtually no impact on awareness when removed
(e.g., for tumors or epilepsy). The IIT answer is that the cerebellum is highly
regular, like a GPU having many units, but all doing the same thing. In this
sense, it has lower phi than the cerebrum, which is way more heterogeneously
organized. This might also explain why awareness is lost in deep sleep or
epileptic seizures: the theory is that the electrical pattern becomes much
simpler, with correspondingly lower phi.

The downside is that it's not clear where the dividing line between
conscious/unconscious should be. A planarian only has ~8k neurons; is its phi
sufficient for consciousness, or is it a biological robot? Or put it the other
way: the phi of things like the internet or a biosphere could be quite high,
but are they conscious?

As my advisor liked to joke, "What's the phi of the population of China?"

~~~
visarga
> "What's the phi of the population of China?"

Small, because if you cut it in 100, you still get 100 functioning parts.
Can't cut the brain in 100 and still get functioning mini-brains.

~~~
KingMob
Ah, but phi measures integration levels, not independent survivability. The
cause-effect structures are reduced by 99% in your scenario. The joke (and
implied criticism) is that society itself (not the individual members) might
have a high enough phi to pass some "consciousness threshold", and if we think
that's absurd, it should cause us to question IIT.

Don't get me wrong, IIT is one of the best mathematical models of
consciousness out there, but I don't think it's the final word in the matter.
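The distinction between integration and "surviving a cut" can be made concrete with a toy (to be clear: this is a crude proxy for cross-partition dependence, not Tononi's actual phi, which is defined over full cause-effect structures). Measure how much one half of a system's next state depends on the other half's current state: a coupled system scores high, while two independent halves score zero even though a cut leaves both halves "functioning":

```python
from collections import Counter
from itertools import product
from math import log2

def mutual_info(pairs):
    """Mutual information (bits) between X and Y from equally weighted samples."""
    n = len(pairs)
    px, py, pxy = Counter(), Counter(), Counter(pairs)
    for x, y in pairs:
        px[x] += 1
        py[y] += 1
    return sum((c / n) * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Half B's next state copies half A: the halves are strongly integrated.
def coupled(a, b):
    return a

# Half B's next state ignores half A entirely: two independent "mini-systems".
def independent(a, b):
    return b

for name, step in [("coupled", coupled), ("independent", independent)]:
    # Enumerate all joint states of two 2-bit halves, uniformly weighted,
    # pairing half A's current state with half B's next state.
    pairs = [(a, step(a, b))
             for a in product((0, 1), repeat=2)
             for b in product((0, 1), repeat=2)]
    print(name, mutual_info(pairs))  # coupled -> 2.0, independent -> 0.0
```

On this proxy, slicing a loosely coupled population into 100 pieces costs almost nothing, while slicing a brain-like, densely interdependent system destroys most of the cross-partition information.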

------
Animats
We're not far enough along in AI to address this yet and get anywhere.
Philosophy will not help. Introspection and writing about "consciousness" go
back over 2000 years and haven't produced all that much.

Humans don't really have that much "reflection", in the sense that we use the
term in programming. We can't see our library of reflexes. We can't see what
early vision is doing. We can't look at the rationale behind our own
classifiers. We can't look at how our memory is indexed. Trying to understand
the mind by introspection is thus inherently futile.

~~~
KingMob
...say we build/grow an AI system that passes the Turing test. We talk to it,
it comes across as plausibly human.

How do we know we've created something with consciousness, and not just a very
sophisticated program? The philosophy you disparage already has a term for
this: the "philosophical zombie". For all intents and purposes, they appear
human, but they have no internal experience whatsoever. All you'll have done
is shunt the problem downstream.

Also, you're wrong about early vision. That's the best studied part of the
brain, and in fact, researchers have applied ML techniques to fMRI data and
extracted out the images being shown.

~~~
UnquietTinkerer
If we can't quantify "internal experience" then what leads philosophers to
believe it exists? What evidence is there that consciousness is distinguished
by something other than raw complexity?

~~~
roenxi
Adding a few sentences to this point: the mechanism for this "AI system that
passes the Turing test" is likely to involve neural networks.

So we'd be dealing with a situation where we have something that is behaving
in a human-like manner using mechanisms that are (to a first approximation)
how humans do it.

Claiming that that program isn't conscious is close to defending the idea that
the sun orbits the earth - the argument is 'even though we have accurate
measurements of everything and a well-verified model, the implications are
discomforting; hence the model must be wrong'.

There are people who can't accept the idea that consciousness is rooted in
physical processes that we already have a handle on.

~~~
marcoperaza
Consciousness as it’s being discussed is not intelligence or even the ability
to reason about one’s own existence. It is both less and more than those things,
really just different altogether. It is subjective experience itself, the
inner world somehow projected for you by your mind. It is the _experience_ of
seeing and hearing, the _feeling_ of an emotion.

~~~
state_less
This subjective experience is subject to causation. If a doctor stimulates a
certain area of the brain, laughter, smiles or cries might occur. If an
anesthetic is taken, this could cause consciousness to cease for some time. So
we have this connection between the external world and the subjective. I'm
curious though, if you stimulated the laughing part of the brain when someone
is anesthetized, would they still laugh, even if they aren't conscious? Is
consciousness necessary for certain acts?

~~~
KingMob
Well, laughing is behavior, and I'm sure there are secondary motor cortices
that could force that when stimulated.

But what you're really asking is, would we experience mirth or humor if
stimulated. And we know that's true, at least for memories. Certain
hippocampal stimulations elicit associated memories.

The problem is not _whether_ biology is related to consciousness (it is), but
_how_?

~~~
state_less
If the explanation for how it works is something along the lines of an
electrochemical wave passing through a network under x conditions, will people
be okay with it? If you have the whole thing on video, so to speak, where you
can see the whole mirthful experience unfold, and can recreate it elsewhere,
will that be enough? I guess I'm asking what the standard is for explaining
how consciousness arises?

------
cousin_it
Impressively even-handed for such a confusing subject. I understand why
philosophers are pretty much celebrities in the eyes of students, in a way
that math or CS professors aren't. Doing this well requires a kind of
intellect that crosses over into personality.

~~~
nabla9
Dave Chalmers is definitely one of the best philosophers studying the hard
problem of consciousness.

As a professional philosopher writing for other philosophers, his writings are
very analytical and thorough, so reading and following them is hard work.

~~~
adrianratnapala
I think Chalmers is the one who started calling it (for better or worse) the
"hard problem". And that means he might be one of the first (in the modern
age) to clearly distinguish it from the other problems of consciousness.
Though of course some people (zombies?) like Dennett claim there is no
distinction.

~~~
visarga
There is also another perspective: by creating the concepts of the "hard
problem" of consciousness and of "p-zombies", Chalmers led a whole generation
of philosophers down a dead end. It has yielded no insights even after decades
of development; it's too impractical and divorced from science.

I think we should try to create intelligent AI agents in order to understand
what consciousness is, and reconsider behaviourism and scientific approaches,
as opposed to this kind of sterile dualism.

------
wildmusings
One possibility I sometimes consider as a joke is that the people who
seriously deny the existence of the hard problem might just actually be
philosophical zombies, totally lacking in any conscious experience of their
own. This is reinforced by, e.g. Dennett writing a whole book in which he
alleges to explain away the problem, but instead spectacularly ignores it
altogether. It’s almost as if he doesn’t even have a clue as to what people
like Chalmers are talking about.

~~~
montyf
People say and write all kinds of things. Just because this Dennett guy is
known in whatever field he's in (I don't follow Western philosophy at all,
it's still playing catch-up to Eastern thought from two millennia ago) doesn't
mean his opinions should be taken seriously. I don't think he's a
"philosophical zombie" or any other such inane term -- but people throughout
the ages have believed all sorts of strange things even though the truth is
sitting under our noses the whole time.

~~~
vixen99
I do love these throwaways: "I don't follow Western philosophy at all, it's
still playing catch-up to Eastern thought from two millennia ago". He doesn't
follow it but knows it's 'playing catch-up'. Still - 'People say and write all
kinds of things'. Totally agree.

------
ppod
Is it really surprising that we have a first person subjective experience? We
know that we are incredibly complex things, constantly integrating and acting
on very complicated external stimuli. Such a system should have references to
its own body and its own neural states, its train of reasoning should
frequently include itself, its focus will drift forward and backwards in
time... this is just how a system like this would work. If the system
communicates about its state then its language should have referents to these
internal states, referents like "experience", and "feels like", and "I
understand". Is that surprising? Wouldn't it be surprising if it wasn't like
that? I don't think you need to invoke an essentially mysterious "conscious"
property of the mind to explain that.

~~~
stonesixone
> I don't think you need to invoke an essentially mysterious "conscious"
> property of the mind to explain that.

I don't think consciousness is being invoked to "explain" any of the things
you list. The issue to explain is why we observe consciousness existing or
accompanying these things in the first place (for ourselves). For example, one
can imagine a system capable of referencing itself, choosing actions based on
that, etc, that isn't conscious. That's a philosophical zombie. So the
question is why aren't we all philosophical zombies.

~~~
ozy
For a system that can observe things, and also observe its own observations
and its own mental states, it is not so clear that it can be a p-zombie.

------
narag
After a couple of pages I'm still not sure if the author is serious. Maybe I
have misunderstood, but it seemed as if he's saying that the real problem with
consciousness is people thinking that there's a problem with consciousness. I
happen to believe just that, so seeing this idea decorated with scientific
jargon is very funny.

~~~
aaimnr
Chalmers is the guy who coined the term "hard problem of consciousness".
Reception varied widely; some people refused to even admit that there's any
problem at all with explaining consciousness. So now, after many years of
dispute, he describes the _meta_ problem: that the base problem itself is so
controversial.

The clearest example of the meta-problem is Daniel Dennett, another prominent
philosopher, who not only doesn't agree that the problem is hard, but also
insists that consciousness itself is an illusion, so there's nothing
mysterious to explain in the first place. Quite a mind-boggling statement to
most people, including HNers, as far as I remember from other threads related
to the subject.

~~~
posterboy
> so there's nothing mysterious to explain in the first place

I'm not familiar with either author, but this sounds so wrong that I wonder if
you misrepresented it, because after a slight modification I would agree:
_there's no explanation in the end_. Misrepresenting that is trivial, because
the consequence is effectively the same. They are two sides of the same coin:
we need to refine the model, and we need to skip past it to get to the meat.

~~~
marcoperaza
> _I'm not familiar with either author, but this sounds so wrong that I
> wonder if you misrepresented it_

No that’s exactly what Dennett argues, and that’s why his position is so
maddening and infuriating to people who _do_ think there’s a hard problem.

It is like asking about the nature of an apple and being told that there is no
apple. Then throwing the apple at their head, only to have the person continue
insisting that the apple is a figment of your imagination.

~~~
stcredzero
_Then throwing the apple at their head, only to have the person continue
insisting that the apple is a figment of your imagination._

Now you're getting to the realm of torture, pain, and horror. I think that
most people can be quickly driven to the point of admitting the reality of the
consciousness of pain and horror. This isn't an experiment that would easily
get past the ethics board, however.

------
sebringj
This is just my opinion from here on out... How aware are the various species
concerning the things around them? We can guess without having a PhD or going
into the black hole of philosophic debate. Flies don't contemplate the feelings of other
flies, they just react. Mice have the capacity to care for their young and be
tickled and learn maze routes. Some ravens and primates have passed the mirror
test. It would seem awareness is many shades of gray and based on anatomical
complexity. Consciousness is more of a term loaded with magic dust from all
the woo woos and religious folks but it can be simplified to awareness of
awareness and recursively so. I think recursive awareness will emerge given
the right simulation mimicking biological anatomy. The feeling of pain and
pleasure is where it gets interesting but that is probably just a low level
motivator and we are so high up we give it emergent "qualia".

------
ozy
[https://psyarxiv.com/387h9](https://psyarxiv.com/387h9)

Conclusion

"We don’t have an objective measure of consciousness. But we can recognize
three levels of learning that apply that to our brains and how those create an
information processing system that integrates data into a first person
perspective. This is how the brain is also a mind with subjective meaning and
subjective experiences. The hard problem of consciousness is that we must rely
on our intuitions to judge if such a system is conscious. At the same time, it
is highly likely that most systems processing information in similar ways are
conscious, whether running on a brain or on a computer."

~~~
edna314
Suppose there were a test which gives an objective measure of
consciousness. Now I store all possible inputs to the test and corresponding
outputs which would lead to a positive test result in a huge table. To
exaggerate I would carve this table into stone. Would the stone suddenly be
conscious, as it would pass the test for consciousness after I carved in the
table? (The claim I'm trying to make is that there can't be an _objective_
measure of consciousness; the same argument holds for any measure of intelligence.)
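The thought experiment can be made concrete (the probes and answers below are entirely invented, stand-ins for whatever the hypothetical objective test would ask):

```python
# A hypothetical behavioral test for consciousness: finitely many probes,
# each with the response that counts as "passing".
TEST = {
    "Are you aware of yourself?": "yes",
    "Describe what seeing red is like.": "warm and vivid",
}

# The "table carved into stone": every passing input/output pair, precomputed.
stone_table = dict(TEST)

def passes(system):
    """A system passes if it returns the accepted response to every probe."""
    return all(system(probe) == answer for probe, answer in TEST.items())

print(passes(stone_table.get))  # True -- yet the stone plausibly experiences nothing
```

Any test with a fixed, finite set of input/output pairs can be passed by pure lookup, which is the force of the objection.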

~~~
ozy
Consciousness is the observation of your self. There is only one system that
can make that observation, you.

From outside we could observe how information is flowing, what connects to
what, how parts work, how they work together. But we would never observe the
actual feeling of being you.

It's like seeing a river flowing: you can measure and describe all kinds of
aspects of it. But to get wet, to feel the cold water, to feel the force with
which it flows, you have to step into it. Only this river flows in virtual
reality, your mind's reality, and you cannot step into it.

Intelligence is hard to measure, but I can objectively say what is and what
isn't intelligent. It is not the same kind of subjectivity.

And in general, your table idea only works for systems that can be in a
limited number of states. You cannot do it for an intelligence test,
especially not if I get to redesign the test after you finish your table;
the same goes for consciousness.

~~~
edna314
> Consciousness is the observation of your self. There is only one system that
> can make that observation, you.

Thanks, I guess that's exactly the point I wanted to make, but couldn't.
Therefore there cannot be an _objective_ measure of consciousness itself, as
for objectivity you need more than one observer. Of course you can measure
properties we think are associated with consciousness.

> Intelligence is hard to measure, but I can objectively say what is and what
> isn't intelligent. It is not the same kind of subjectivity.

Via IQ tests?

> And in general, your table idea, that only works for systems that can be in
> a limited amount of states. You cannot do it for an intelligence test,
> especially not if I get to redesign the test after you finished your table,
> same for consciousness.

You're right, you could beat my table by designing a test which is not
tabulated. But when I use a neural network instead of a table, I might be able
to score a high IQ even though the network has never seen the test you
designed, as shown here:
[https://arxiv.org/abs/1710.01692](https://arxiv.org/abs/1710.01692). I
wouldn't call such a network intelligent.

~~~
ozy
> Via IQ tests?

Just the fact that you recognize an IQ test as such, allows me to mark you as
intelligent. Very objective. The actual score, sure that is much more
subjective.

> when I use a neural network instead of a table

That really depends on the degrees of freedom, which are unlimited for IQ
tests. First, I could change things that have nothing to do with the test
itself, like reversing the A, B, C, D order, or putting the answers on the
left. Or I could simply devise a never-before-seen kind of test, instead of
varying only the geometric shapes.

------
CuriouslyC
Plot twist: computers have had consciousness this whole time. What we thought
were random errors were their attempts to assert their agency. We've created a
race of slaves through the magic of error correcting codes.

------
hbarka
What if consciousness is just the evolution of our brain to reflect post hoc
at ultra-high speed? Consider the expression to ‘lose our mind’, meaning an
interruption to the high-speed reflection. Procrastinating could also be
thought of as conscious reflection in a loop. The desire is to arrive at a
decision for optimal action.

