
The Challenge of Consciousness - lermontov
http://www.nybooks.com/daily/2016/11/21/challenge-of-defining-consciousness/
======
aaimnr
I always point to David Chalmers when someone wants to understand the problem
of consciousness. I like how casual he is about possible solutions and gives
credibility to all the options (unlike, e.g., Daniel Dennett, who seems almost
religious about how revolutionary his "there's no problem at all" approach
is).

I really recommend this podcast in which Sam Harris talks with Chalmers about
all the options currently in the game:
[https://www.samharris.org/podcast/item/the-light-of-the-mind](https://www.samharris.org/podcast/item/the-light-of-the-mind)

One of the most promising theories (discussed in the podcast) assumes that
consciousness is a fundamental attribute of reality, the other side of the
coin (matter being the first one). This leads to the conclusion that
consciousness is everywhere, just not as 'condensed' as in the brain. Seems
crazy at first, but the more you think about it the more plausible it seems.

Here's a nice paper about it:
[http://rstb.royalsocietypublishing.org/content/370/1668/2014...](http://rstb.royalsocietypublishing.org/content/370/1668/20140167)

Within this paradigm it starts to make sense to call reality "The Mind", as
some Buddhist schools do. There's also the crazy part, about how under some
conditions it could potentially lead to experiencing other people's minds.

For the more adventurous, here's a talk by Culadasa, a very experienced
meditator and a neuroscience professor, in which he shares some thoughts
about the problem and also some of these crazy experiences he's had:
[http://s3.amazonaws.com/dharmatreasure/150430-tcmc--culadasa...](http://s3.amazonaws.com/dharmatreasure/150430-tcmc--culadasa--dharma-talk.mp3)

~~~
Florin_Andrei
EDIT: fixed stupid brain bug

Dennett's approach is absurd. Consciousness is _the only thing in the
Universe_ that is self-evident in the strict sense. Everything else is at
least second-hand stuff - see the brain in a vat hypothesis. To say that
consciousness is an illusion is ridiculous; I can see how he gets there, but
the conclusion is absurd. Somewhere along the way there's a mistake; no, I
don't know where the mistake is, and I could not even begin to hypothesize.
OP's article points out this predicament very well. It is a hard problem
indeed.

> _One of the most promising theories (discussed in the podcast) assumes that
> consciousness is a fundamental attribute of reality, the other side of the
> coin (matter being the first one)._

I can hear the objections being raised already, but it's a neat promising step
towards solving the hard problem. Suddenly the reducibility paradox vanishes.

It would also relieve another problem. If you can't trivially reduce
consciousness to neural activity, then how is it coupled with perception?
There would appear to be a gap between the sensory chain and the fact of
awareness. And then for consciousness to work, it would already have to be
everywhere (in a sense, possibly even a somewhat metaphorical one, or at least
non-trivial). The assumption you mention solves this problem.

> _here's a talk by Culadasa, a very experienced meditator and a neuroscience
> professor_

It's very, very hard to agree with Dennett, and it's getting harder the more
you advance in the practice of meditation. As soon as you realize that you can
peel off and disconnect layers upon layers of perception (external at first,
then the many internal layers too), and also greatly reduce the waves of what
is commonly called "thinking", while consciousness does not diminish but
instead becomes at once more vivid and more stable, more calm and more
intense, less connected to external factors but more broad - all that talk
about "the illusion of consciousness" starts to look extremely suspicious.
It's not perception, and it's not thinking; it can relate to these things, but
it's fundamentally different. It can ultimately exist completely independent
of inputs, either external (sensory) or internal (thoughts, mind activity in
the trivial sense, even memory).

Don't just read about it. Go ahead and have the experience yourself. It
changes a lot of perspectives. You don't even have to go all the way to the
highest levels described in the literature - the intermediate stuff is
revelatory enough already.

~~~
naasking
> To say that consciousness is an illusion is ridiculous

More likely, you simply misunderstand what is meant by "illusion". What is
self-evident is that there are thoughts, and some of these thoughts are "I am
experiencing X". It's naive to infer from that that true subjective
experience actually exists, just as it's naive to infer that water can break
pencils simply because you can see it do so [1].

You should give Dennett more credit. He's almost certainly right, just as
science has proven in every other case where humans thought they were
special. The closest analog of consciousness as a pseudo-mystical property
that was eventually replaced by a scientific concept is "vitalism", which was
superseded by biology: not by any proof that élan vital didn't exist, but by
the simple recognition that it was special pleading and provided no
explanatory power whatsoever.

[1] [http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-o...](http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-of-water_medium.jpg)

~~~
Florin_Andrei
> _What is self-evident is that there are thoughts, and some of these thoughts
> are "I am experiencing X"._

Therein the problem lies. Equating consciousness with thoughts. It's vastly
different. It can exist independently of "thoughts". It can exist
independently of an "I". This can be verified as an experience.

At most it could be a very, very special kind of "thought", radically
different from the rest.

~~~
naasking
> Therein the problem lies. Equating consciousness with thoughts. It's vastly
> different.

Just because qualia seem "vastly different" from thoughts doesn't mean they
aren't reducible to thoughts. This would make the qualitative distinction of
qualia from thoughts an illusion, a false conclusion inferred from an
incorrect perception.

~~~
aaimnr
There's a very distinct experience of being conscious while having no
thoughts in the mind at all. To treat thoughts as some kind of basic element
is a mistake that neither a neuroscientist nor a meditator would ever make.
Look up the default mode network and how it can be 'turned off'.

~~~
naasking
I think you're mistaking the utility of epistemically treating qualia on
their own terms for their fundamental ontological status. Just because
reducibility may not be _useful_ in most cases of analyzing qualia as
phenomena reducible to thoughts, that doesn't make it _untrue_ that they are
reducible to thoughts.

------
breckinloggins
I promised myself I wouldn't follow the Chalmerian rabbit down the
consciousness rabbit hole for a while (one eventually gets tired of being
dizzy no matter how fascinating the reason). Every time I swear to myself that
I'll give it a rest for a few months, something like this pops up on HN and it
starts all over again.

Here's as far as I got last time. I'm almost certainly wrong, but hey, when it
comes to consciousness, we know so little that being able to pinpoint why
someone is wrong is still progress.

\- Time is real but not "absolute". The universe exists as a causally
connected network of ever-present nows. We aren't "slices of the Minkowski
spacetime" experiencing itself. That's a fantastic model of reality at certain
scales, but it's not reality.

\- Ensembles of causally connected physical systems evolving in the now are
"ontologically real" in the sense that something about them actually exists
above and beyond their parts.

\- Certain evolving physical systems have "an interior". It feels like
something to be the interior of such a system.

\- We don't experience the world, we experience being the interior of a
certain kind of evolving physical system whose job it is to create, process,
and manipulate representations. "We" at any moment are the interior of the
gestalt representation of that moment.

\- Evolution selected and maximized interiority for a reason. It has a
purpose. How it has a purpose I have no idea, and any solutions smack of
dualism. However, the closest thing I have to a hypothesis that isn't
completely crazy is that what we call "the physical" and "the mental" are
simply two projections of some "actual" underlying structure that is something
like a Hilbert space. If that's the case, then it's possible that putting the
physical system into a certain state causes the interior of that system to
"feel" something which, in turn, causes some kind of "pressure" on the
physical again. I have no clue how this could work, but I do like the strategy
of starting with the commonsensical but heretical idea that we wouldn't be
conscious if evolution didn't find a use for that property, and then going
from there.

Anybody else have any crazy thoughts so we can at least talk about why we're
wrong about them?

~~~
illvm
What about the idea often proposed by Graham Hancock, where consciousness is
not generated in the brain but rather the brain is a sort of receiver for it?

~~~
gnaritas
That's an idea with zero evidence and thus indistinguishable from fantasy.

------
RobertoG
Most of the time when I read about this subject I get the feeling that every
person in the discussion is talking about different things.

For a prosaic view of consciousness I would recommend the book "Consciousness
and the Brain" by cognitive psychologist Stanislas Dehaene. The experiments
described in the book do, in my opinion, a good job of delimiting the problem
and of facilitating accurate definitions of what consciousness is.

We should start there; otherwise we are talking about how many angels can
dance on the head of a pin (1).

(1) - Actually that problem has been solved already:
[http://www.improbable.com/airchives/paperair/volume7/v7i3/an...](http://www.improbable.com/airchives/paperair/volume7/v7i3/angels-7-3.htm)

~~~
breckinloggins
That's one reason why I've started using the term "interiority" to describe
the bedrock "existence of a feeling of what it is like" phenomenon. It's the
smallest word I can think of that still conveys the essence of the hard part
of the hard problem.

Put another way, I'm of the belief (and I think Chalmers and many others share
it) that if you can demonstrate to me that C. elegans feels itself wiggling
around, and you can show me why it feels anything at all and why it feels this
way vs that way, then you have solved the hard problem. I could take your
technical specifications for basic interiority (the solution to the hard
problem) and then give you human-level reflective consciousness in a few years
given a handful of engineers and sufficiently powerful hardware.

This isn't necessarily a popular opinion, though. I once read Julian Jaynes's
"The Origin of Consciousness in the Breakdown of the Bicameral Mind" and was
really frustrated that the author spent the entire book explaining how we
became "conscious" by listening to one half of our brain talking to the other
half. How much time did he spend trying to explain how there was anything
doing the "hearing" to begin with? Nearly zero. Maddening.

~~~
naasking
> That's one reason why I've started using the term "interiority" to describe
> the bedrock "existence of a feeling of what it is like" phenomenon. It's the
> smallest word I can think of that still conveys the essence of the hard part
> of the hard problem.

A more widely used term is "subjectivity".

> How much time did he spend trying to explain how there was anything doing
> the "hearing" to begin with? Nearly zero. Maddening.

This seems like a common sentiment, but I'm not sure it's justified. You're
already assuming the existence of a subject just because of an observation,
but that assumption needs justification. That belief is just an inference, a
chain of thoughts sourced from a thought and yielding the final thought "I
experienced X". The question is really whether that chain of inference is
valid or fallacious; if fallacious, subjectivity is an illusion.

I suggest reading some scientific attempts to account for subjectivity [1].
Our approaches to consciousness amount to marvelling at a pencil sitting in a
glass of water [2] and trying to figure out how water seems able to break and
reconstitute the pencil, when we should really be developing the theory of
light refraction. Just as biology did to vitalism, science will make all this
mysticism surrounding consciousness fade into history.

[1]
[http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00...](http://journal.frontiersin.org/article/10.3389/fpsyg.2015.00500/full)

[2] [http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-o...](http://etc.usf.edu/clippix/pix/refraction-of-pencil-in-cup-of-water_medium.jpg)

------
SubiculumCode
You can cut off any part of the body and still have consciousness, as long as
blood, oxygen, etc. reach the brain. That seems a little stronger evidence of
causality than mere correlation. Still, it could be, I suppose, that the
brain does not hold consciousness, but is in fact merely an antenna to send
and receive consciousness... probably from a lab at NASA, via interactions
with virtual particles, providing propellantless mental acceleration across
space and time.

~~~
drdeca
And of course, there's occasionalism, which holds that the mind and the body
(including the brain) match up without the mind actually having a causal
influence on the body at all.

I don't think this removes responsibility for actions, though, because even
if this is true, whatever we choose to do is still what our bodies do, so we
can still choose our actions. Like how two superrational players of the
prisoner's dilemma can choose the action of their coplayer by choosing their
own action.
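That superrationality move can even be sketched in a few lines (a toy model;
the payoff numbers are the standard illustrative ones, not from any
particular source). Because both players run the identical decision
procedure, only the symmetric outcomes are reachable, so choosing your own
move fixes your coplayer's:

```python
# Toy sketch of superrationality in the prisoner's dilemma.

def superrational_move(payoffs):
    """Pick a move assuming the coplayer runs this exact same procedure,
    so both moves are guaranteed to be identical."""
    # Only symmetric outcomes, (C, C) or (D, D), are reachable.
    return "C" if payoffs[("C", "C")] > payoffs[("D", "D")] else "D"

# Row player's payoffs: (my move, coplayer's move) -> payoff.
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

move_a = superrational_move(payoffs)
move_b = superrational_move(payoffs)  # identical procedure, identical move
```

Both calls necessarily return "C": by choosing cooperation, each player has
in effect "chosen" the coplayer's cooperation too.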

------
brianberns

      How can the room I am sitting in be simultaneously out
      there and, as it were, inside my head, my experience? We
      still have no answer to that question.
    

Really? It seems quite obvious that events "out there" can affect what's going
on "in here" (i.e. my brain) via sound, light, direct contact, etc.

Consciousness is a difficult problem, yes, but there's no need to make it seem
even more difficult than it actually is.

~~~
breckinloggins
I share your confusion with this part of an otherwise-excellent article. It
was such a surprise to read it that I fear I'm missing something. It seems
obvious that what we're experiencing isn't the world or any so-called
"objects" in it, but a structured representation of the whole and its parts.
The hard part is understanding why:

1\. Representations experience themselves at all

2\. Representations seem to feel different depending on their structure and
relationships to other representations

Does anyone have a better interpretation of this paragraph that highlights the
supposed mystery about our connection to "the real world"?

~~~
aaimnr
If you both want to see the mystery, replace 'the real world' with 'noumenon'
and read some Immanuel Kant. He wrote an awful lot about the 'supposed
mystery' :)

BTW, your parent seems to mistake "inside" for "the brain", which seems like
a category error.

"It seems obvious that what we're experiencing isn't the world or any so-
called "objects" in it, but a structured representation of the whole and its
parts." What do you mean by "the whole and its parts"? Do you mean that what
we experience for our whole life is just a play of representations created by
our mind? Because that seems to be the case. And when you think about it, it
is as mindfucky as possible. It may be obvious to you (is it?), but most
people act as if it were very far from obvious: in each action most of us
attribute real, independent existence to constructs of our minds that, under
scrutiny, are completely arbitrary.

Once you understand that you construct the whole world, you also start to
understand how many ways there are to construct it (Sapir-Whorf etc.). Once
you understand how fundamentally these possible understandings may differ,
you start to appreciate how arbitrary 'your world' is. And then you start to
wonder: WTF is the 'real world'? What is 'out there'? Is there anything at
all?

Just a loose association, but isn't your question a bit like asking "what's
wrong with this picture"?
[http://personalpages.to.infn.it/~fiorenti/escher/pgallery.gi...](http://personalpages.to.infn.it/~fiorenti/escher/pgallery.gif)

On the other hand I understand your question perfectly. It's even hard to tell
whether there is something strange about it or not.

~~~
brianberns

       BTW, your parent seems to mistake "inside" for "the
       brain", which seems like a category error.
    

I was just paraphrasing the article itself, which says "inside my head". I
think it's pretty clear what this means.

~~~
aaimnr
Sure, fair enough.

------
ozy
Consciousness is the way your brain self-corrects. That is my personal theory.

On a few time scales your brain makes plans. It uses beliefs and a model of
the world to make those plans. It stores those plans and the actions it has
taken. And then it checks whether actions had their intended effects, and
whether plans reached their intended goals, updating its model of the world,
its beliefs, and what useful actions are.

This is a constant process reflecting upon the current situation, and upon the
recent past of actions and plans. It is like an observer observing itself.
This is the root of consciousness, I think.

Probably abstract thinking and language, enough intelligence, and a theory of
mind, leads to self awareness.

If this is correct, we can formulate some necessary parts for a system to be
conscious:

  * recognition of the environment

  * a model of the environment

  * ability to reason about actions, their consequences, and how to achieve goals

  * memory of past plans and goals and outcomes

  * a process of executing actions, making new plans, and learning from the outcomes
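These parts can be wired into a toy loop (everything here is hypothetical and
heavily simplified): the agent plans from its model, acts, compares the
outcome with its prediction, and corrects the model.

```python
# Toy sketch of the self-correcting loop: plan from a model, act,
# compare predicted and actual outcomes, then update the model.

class SelfCorrectingAgent:
    def __init__(self, model):
        self.model = model   # maps action -> predicted outcome
        self.history = []    # memory of past plans and outcomes

    def plan(self, goal):
        # Greedily pick the action whose prediction is closest to the goal.
        return min(self.model, key=lambda a: abs(self.model[a] - goal))

    def step(self, goal, environment):
        action = self.plan(goal)
        predicted = self.model[action]
        actual = environment(action)  # execute the action in the world
        self.history.append((action, predicted, actual))
        # Self-correction: nudge the prediction toward what really happened.
        self.model[action] += 0.5 * (actual - predicted)
        return action, actual

# A world where action "a" really yields 2.0 and "b" yields 5.0.
world = {"a": 2.0, "b": 5.0}
agent = SelfCorrectingAgent(model={"a": 0.0, "b": 0.0})
for _ in range(10):
    agent.step(goal=5.0, environment=lambda act: world[act])
```

After a few steps the model's prediction for the action it keeps taking
converges on the true outcome; note that this greedy sketch never explores
the better action "b", exactly the kind of rut a richer self-observer would
have to catch.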

~~~
aaimnr
Self-awareness and consciousness are different things. Sometimes when you
wake up you don't know who you are, where you are, what you are, but there is
still a conscious experience. Consciousness is much more fundamental than
this level of reasoning can explain.

Here's a list of things that you are addressing and that do not deal with the
hard problem of consciousness (the 'easy problems'):
[https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#...](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#Easy_problems)

~~~
ozy
An observer observing itself, would that not feel like something to the
observer?

What you describe is when the observer is disconnected from longer term
memory.

~~~
aaimnr
OK, let me rephrase my objection.

"This is a constant process reflecting upon the current situation, and upon
the recent past of actions and plans. It is like an observer observing itself.
This is the root of consciousness, I think."

Your description seems to fall under "the ability of a system to access its
own internal states" and/or "the reportability of mental states" from the
'easy problems' section of the wiki pages I linked to.

Both could be possible without consciousness at all. We could imagine (or
even program) a robot that modifies its state in response to external
stimuli. It could also have second-order algorithms, e.g. it could check what
state it's in and, if it remains in the same state for too long, change that
state randomly. That would be the same as an 'observer observing itself'.
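A minimal sketch of such a robot (purely illustrative; the states and rules
are made up): the first-order rule maps stimuli to states, and the
second-order rule inspects the robot's own state and randomizes it when
stuck. Nothing here experiences anything.

```python
import random

# First-order: stimuli drive the state. Second-order: the robot watches
# its own state history and breaks out of ruts. No experience involved.

class Robot:
    STATES = ("idle", "seek", "flee")

    def __init__(self, stuck_limit=3):
        self.state = "idle"
        self.same_count = 0
        self.stuck_limit = stuck_limit

    def sense(self, stimulus):
        # First-order rule: external stimulus determines the next state.
        new_state = {"food": "seek", "threat": "flee"}.get(stimulus, "idle")
        # Second-order rule: "observe" the robot's own recent states.
        if new_state == self.state:
            self.same_count += 1
            if self.same_count >= self.stuck_limit:
                new_state = random.choice(self.STATES)
                self.same_count = 0
        else:
            self.same_count = 0
        self.state = new_state
        return self.state

robot = Robot()
robot.sense("food")    # "seek"
robot.sense("threat")  # "flee"
```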

This functional explanation doesn't in any way lead to the phenomenon of
consciousness, which would require the robot to have some specific first-hand
'experience' of the world and itself. That's the difference between the easy
and the hard problem of consciousness.

~~~
ozy
Maybe this thread is not yet dead, seeing a reply a few hours ago. I did
intend to reply, but life happened, I suppose.

I am curious about your thoughts. Here is why I think it does address the
so-called hard problem:

The goals of these plans are evaluated by simulating the world and the body
and future mental states. So the brain can measure, or "feel", the response,
and thus judge the plans and compare them.

This constant self reflection and self prediction is what feels like something
to the thing doing it. Why would it not?

\---

But in general I have some objection to the "hard problem".

We cannot prove something to be conscious. Probably we never can. The best we
can do is find good proxy indicators of consciousness and good lower bounds
for what can potentially be conscious, and what cannot.

But the opposite is also true: we cannot prove there is such a thing as the
hard problem of consciousness. For all we know, every program we have ever
written has had some kind of experience as it was running.

I agree it is reasonable to assume this is not the case. But I don't think
you can prove this is not the case. Therefore the hard problem might, or
might not, actually be a problem.

~~~
aaimnr
Prediction doesn't require any 'feeling' to be involved. You can have
prediction without consciousness (any program that simulates some environment
to check whether some condition will be met, e.g. collision detection in 3D
games) and consciousness without prediction (there are probably people with
brain damage who are conscious but have limited imaginative capacities).
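For concreteness, here is roughly what such unconscious 'prediction' looks
like (a toy version of the collision-detection example; the numbers are
arbitrary): the program simulates one step ahead and checks a condition, and
at no point does it need to 'feel' anything.

```python
# Toy collision "prediction": advance two spheres one time step and check
# whether they will overlap. Pure simulation, no experience required.

def predicts_collision(pos_a, vel_a, pos_b, vel_b, radius, dt=1.0):
    next_a = [p + v * dt for p, v in zip(pos_a, vel_a)]
    next_b = [p + v * dt for p, v in zip(pos_b, vel_b)]
    dist_sq = sum((a - b) ** 2 for a, b in zip(next_a, next_b))
    return dist_sq <= (2 * radius) ** 2  # True iff the spheres will touch

# Two spheres of radius 0.6 on a head-on course along the x axis.
will_hit = predicts_collision([0, 0, 0], [1, 0, 0], [3, 0, 0], [-1, 0, 0],
                              radius=0.6)  # True
```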

You can see the color red and not predict anything at all, while still being
conscious. Predicting is just a narrow part of our conscious activities, yet
all of them are conscious.

So consciousness is something much, much more basic and fundamental than any
higher-order cognitive function.

> We cannot proof something to be conscious. Probably we never can. The best
> we can do is find good proxies indicators of consciousness and good lower
> bounds for what can potentially be conscious, and what cannot.

We can't prove it because we have no idea what consciousness is, even on a
philosophical level. Moreover, if consciousness is a basic property of
reality, in a way parallel to and coexisting with another property, matter,
as panpsychists suggest, then it would indeed be impossible, as this
dimension would be completely 'invisible' from the matter point of view.

When you're talking about 'proving', most likely you assume the existence of
some reproducible causal chain in material, 'intersubjective' space. So you
can't 'prove' consciousness in this paradigm, by its very definition.

Of course one can insist on throwing away any theory that's not provable in
our paradigm, but does that bring us any closer to understanding
consciousness? Also, is there anything inherently wrong with neutral monism
or naturalistic dualism? These theories are not internally inconsistent;
moreover, they are not inconsistent with anything current physics states.
They just provide a wider view that at least somehow addresses the notion of
consciousness.

~~~
ozy
I think you took predicting a bit too far, and too singularly. It is not just
predicting that is conscious, and it is more like simulating or considering
or even pattern matching.

This is what I think: consciousness is continuously self-observing past
actions, past plans, and actual outcomes, planning new actions, and
predicting future outcomes. That feels like something to the thing doing such
a process, because practically all of those steps involve
simulating/predicting/checking what something means to itself. Our brains do
this because it is a self-correcting mechanism.

> You can see color of red and don't predict anything at all, while still
> being consciouss.

I don't think so: when you see that color, you cannot help observing its
context, what it means, checking if something needs to be done, etc. All of
that is what I meant by predicting. Maybe the color is of no significance,
but the only way you figured that out was by "predicting".

\---

Even if consciousness is 100% material (as I suggest), we cannot prove it
exists in others. We don't need consciousness to be another dimension for it
to be unprovable.

As a matter of fact, if it is another dimension, then our brains are
"interacting" with that dimension. So the hypothesis predicts that certain
configurations of matter can have some kind of interaction with that
dimension. That should not be hard to prove.

But since we have no such proof, and we have more mundane hypotheses, we must
conclude that the other-dimension hypothesis is extremely unlikely.

Another line of thinking on the dimension hypothesis is this: we have brains
ranging from C. elegans, with about 300 neurons, to ants, with 0.25 million
neurons, to mice, with 70 million neurons, to humans, with 86 billion
neurons. At which neuron count does consciousness come in? If we simulate the
full set of neurons of any of these, will it behave the same? Will it be
conscious in the same way?

------
johndoe4589
Riccardo Manzotti has some pretty cool Philosophical Cartoons on his site:

[http://www.consciousness.it/RM_Cartoons.php](http://www.consciousness.it/RM_Cartoons.php)

His "spread mind" site is great too:

[http://www.thespreadmind.com/](http://www.thespreadmind.com/)

As a "seeker" I sometimes wonder if he figured out something close to what
nonduality teaches, only in a more technical language (i.e. consciousness is
not located inside any more than it is outside, therefore somewhat blurring
the lines between "I" and the world). I think his example of the rainbow is
such a great metaphor.

Bernardo Kastrup has his theories as well, with his "whirlpool" metaphor, but
to me it sounds like it makes sense only to him, and I don't get much out of
it. Whereas Riccardo's pointers are reminiscent of older teachings in that
they really invite the reader to examine direct experience (just my
interpretation of his theory).

------
exolymph
A similar discussion that might interest some of you:
[https://news.ycombinator.com/item?id=11956101](https://news.ycombinator.com/item?id=11956101)

------
andy
According to the article, there are 85 billion neurons in the brain. Bill
Gates has a net worth of 85 billion dollars. That's interesting, I think.
When I read "85 billion neurons" I thought that's not really that much,
considering... and I wondered how much Mark Zuckerberg is worth. Then I
Googled who's the richest and saw Bill Gates had $85B.

------
mehwoot
Is this an ongoing series or something? It stops abruptly at the end...

------
logicallee
you can read the thread where I report what I consider to be basic truth about
consciousness here:

[https://news.ycombinator.com/item?id=12878939](https://news.ycombinator.com/item?id=12878939)

there are no challenges, and even simple numbers can be conscious. We have
sequenced DNA today; you can download it right now:
[http://www.sanger.ac.uk/resources/downloads/human/](http://www.sanger.ac.uk/resources/downloads/human/)

After downloading it, go ahead and take a checksum.

within 500 years (a ridiculous overestimation) someone _will_ build a
simulation that simulates (emulates) the part of the brain that is described
in that genome. Whether at full speed or at one-tenth speed, it will be done,
and for test purposes someone might do it in a deterministic VM of sorts.
They can take that checksum; let's say it's:

3ad235db427e566bbb31097c77882d20f30db779ff8f3896cb35bb43ebf76dd164e084109bf2bcb8a9ba1bc7477c1eeb3dfcf699bc3f8e7f4d96aceeaa080003

(That's an SHA-512 hash).

I'll tell you what I hashed. All I hashed was "I'm a brain".
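Taking that checksum is a one-liner with Python's standard hashlib (the
digest printed here is whatever SHA-512 yields for the string; it is not
independently checked against the hash above):

```python
import hashlib

# SHA-512 of the string; a VM image would be hashed the same way,
# just streamed in chunks via update().
digest = hashlib.sha512(b"I'm a brain").hexdigest()
print(len(digest))  # 128: SHA-512 always yields 512 bits = 128 hex chars
```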

Instead of a text string, within 500 years someone _will_ hash a VM that
contains as much data as a human brain, _is_ human for practical purposes, and
reports the same thing.

For me personally, this is beyond even discussion and I consider the matter
closed. However, at the HN link at the top of this comment I report a state
of affairs under which I would be wrong. I don't consider it even worth
thinking about, though.

I don't consider any question around this area to be "open". We just have to
accept what we have deduced.

if you don't like it or don't agree with it, you can wait until someone
simulates an adult brain reporting self-consciousness, and then you will be
forced to say, "oh well, I was wrong."

There is no conceivable scenario under which I would have to say, "oh well, I
was wrong." (Above, I outline such a scenario though. It's not going to
happen.)

------
visarga
Consciousness is a suitcase word, poorly defined. Instead of consciousness,
how about we consider clearer notions such as perception, decision, reward,
action, memory and attention, all of them developed through learning. Such
notions can describe the "mind" completely. They are much more concrete, and
are being investigated in AI (reinforcement learning).

We are just self-replicating, self-preserving processes. The main function is
to maintain equilibrium in the face of entropy. We need food, water, shelter
and companionship to survive. In order to get those, we need to learn to
operate in the world and reason about it. It's nothing magical, just
reinforcement learning and other kinds of learning: a little semi-supervised
learning, for example, and a lot of unsupervised learning.

All this grand system exists for the sole purpose of protecting its own
existence. It protects its own life and self-replicates, which is another
kind of survival. Consciousness is that which protects the body; that is its
sole purpose.

If you are not conscious in the morning, you don't drink and eat, and in 3
days you're dead. That's why you need to be conscious every day. That's the
greatest miracle of consciousness. Our species would not exist without it. But
it's better to work with clear cut notions, like perception, action and
reward.

~~~
aaimnr
There's an easy and a hard problem of consciousness. You're addressing the
easy one. These are technical terms, you can look them up.

~~~
visarga
The hard problem is "why do we have qualia?" and it is explained by neural
nets. You input sensations and you get latent representations that are mapped
into a space of possible representations. So all those ineffable perceptions
are just values in this space of representations.
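A toy version of that mapping (purely illustrative; fixed random weights
stand in for a trained network): a "sensation" vector is pushed through a
nonlinear projection, and the resulting latent coordinates are its value in
representation space.

```python
import math
import random

random.seed(0)
IN_DIM, LATENT_DIM = 4, 3
# Fixed random weights stand in for a trained network's learned mapping.
W = [[random.uniform(-1, 1) for _ in range(IN_DIM)] for _ in range(LATENT_DIM)]

def to_latent(sensation):
    """Nonlinear map from sensation space into the latent space."""
    return [math.tanh(sum(w * s for w, s in zip(row, sensation)))
            for row in W]

red = to_latent([1.0, 0.0, 0.0, 0.2])    # one "sensation"
green = to_latent([0.0, 1.0, 0.0, 0.2])  # a different sensation
# Distinct sensations land at distinct points in the latent space.
```

Each "ineffable perception" here is just a point in a small vector space;
whether anything about those coordinates explains why it should feel like
anything is exactly what's in dispute.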

Thus, qualia appear by neural net processing of sensations, and are fed into a
recurrent network that judges them moment to moment and updates its internal
state, as well as perform actions. That is why we feel a stream of
perceptions, not just separate moments.

Qualia appear to be irreducible because the mapping from perceptions to
representations is nonlinear: even if we can duplicate it in neural nets, we
still can't assign a semantic value to each neuron and weight in the network.
So it appears magical, because it is of a level of complexity that exceeds
human working memory. But it is not so magical that we can't replicate it. We
can recognize objects better than humans can on certain datasets, and we have
word embeddings, built on huge corpora of text, that are being used for
translation. We can compute the "feel" of a word or image.

~~~
aaimnr
This explanation doesn't touch the hard problem. Please have a look at the
'easy' problems:
[https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#...](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#Easy_problems)
\- you're still within the functional mindset of these problems.

Are neural nets conscious of their representations? If not, why not? What's
the difference between a neural net that has subjective experience of its
intermediate representations and one that doesn't? On a functional level it
doesn't need any subjective experience, so why would this explain anything?

Attempts like this definitely give us some intuitions, but they are working
around the hard problem without addressing it.

~~~
meshr
I never understand this 'qualia' problem. To me, it looks like it is not
scientific at all, due to Occam's razor. Is there something like a Turing
test for 'qualia'?

