
The Hardening of Consciousness - Hooke
http://www.nybooks.com/daily/2017/09/11/the-hardening-of-consciousness/
======
leepowers
> Manzotti: However, the essential underlying idea here is simply that
> _neurons produce consciousness._ It’s as crude as that. We are simply asking
> the brain to do what the soul once did. Of course, what neuroscience has
> actually shown is how neurons consume chemicals, absorb other chemicals and
> release them, produce and fire off electrical charges, and so on. In many
> situations such activities occur in strict relation to certain experiences
> we have. But then, so do the activities of many other cells in the body. And
> so do the external things we experience.

This strikes me as disingenuous. Consciousness is correlated strongly and
solely with the brain. Meaning, if you cut off my hand I can still think and
feel and reason. If you remove a fist-sized volume of tissue from my
prefrontal cortex I'll never have another thought again.

I guess I'm missing the point Manzotti is driving at. Experience is something
that needs to be accounted for. I can deny many things but I can't deny that
I'm experiencing some thoughts and feelings and sense perceptions right now.
Even if the self and experience are illusory there must be something causing
that illusion to exist. If consciousness exists in this universe then it must
be explained by the laws and constants of this universe. If it's not _neurons
that produce consciousness_ then there must be some other physical phenomenon
that generates experience.

~~~
newsmania
> If it's not neurons that produce consciousness then there must be some other
> physical phenomenon that generates experience.

Not necessarily. If you think about it, consciousness is not physical. My
thoughts have no mass. My experience of the color red has no height or width.
Why should we assume that something that is not physical must come from the
physical world? I think the best argument is that neuron functioning
correlates with thought. However, as we know, correlation does not necessarily
mean causation.

I think one big hurdle for understanding consciousness is that people still
have a matter-only mindset. I think people need to recognize that
consciousness is not material. Then they will begin to make progress on the
fundamental question, which is how something that is not material can arise
from matter, or whether it comes from another source.

~~~
mikeash
Does a JPEG containing the color red have mass? Are JPEGs non-physical?

~~~
newsmania
I may not understand your point correctly, but I would say yes, they do. The
JPEG is an arrangement of atoms in an SSD. This is different from my
_experience_ of the color red that I see on the screen. The color red is
physical. It is a type of photon with a tiny mass and volume. However my
experience of it has no mass.

~~~
rout39574
No; the JPEG is a pattern.

There are a variety of ways we could recognize the pattern.

+ The series of bits you start with.

+ The bitmap on the screen.

+ The BMP which would also generate that bitmap.

+ The GIF which results in the same frame.

+ The emotional impression you take away from it. "That meme I remember so
well"

And so on.

The JPEG is closer to qualia than is the arrangement of bits; closer to "your
experience".
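The multiple-realizability point can be illustrated with a trivial sketch:
the same pattern recognized across three different encodings. The pattern and
names here are made up purely for illustration, not taken from any real image
format:

```python
# One pattern, realized in three different encoded forms.
pattern_bytes = b"\xff\x00\xff"        # raw bytes, as on an SSD
pattern_hex = "ff00ff"                 # a hex-string representation
pattern_ints = [255, 0, 255]           # a list of pixel-like integers

# Each representation decodes to the same underlying pattern,
# even though the "physical" forms differ.
canonical = bytes(pattern_ints)
assert bytes.fromhex(pattern_hex) == canonical
assert pattern_bytes == canonical
print("all three representations carry the same pattern")
```

The pattern is what survives translation between media; no single encoding is
"the" pattern.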

~~~
glenstein
I don't agree with the person you are replying to at all. But I can still
respect their point that there's a distinction to be made (or at least argued
for) that the kinds of explanations we give for JPEGs are ultimately
physicalist explanations. Everything on your list is a different physical
medium a JPEG could be realized on (though maybe some would debate the last
one).

But the person you are replying to probably wants to argue that qualitative
experiences aren't realized on any physical sort of anything, let alone
translatable across various physical mediums.

Now, again, I think that argument is wrong. I think these experiences happen
in our brain, though it's currently unclear exactly how. But I at least
understand their impulse to put those experiences in a different category than
a JPEG, and why the ability to translate a JPEG across different mediums
wouldn't speak to the concern they are raising.

------
sjbase
Interesting that under the Chalmers theory, there's plenty of room for
computers as we know them today to be conscious. That "critical point" where
quanta become qualia, if it's purely a matter of complex arrangements of bits,
could have been reached by now. I'm sure I'm not the first to put this forth,
but maybe your laptop undergoes some form of experience like the "phenomenal
mind." One where it perceives beauty in complex patterns emerging in sequences
of instructions. Or pleasure from a hard computational task executed with low
time complexity.

I suppose the fact that we haven't observed any of the effects of
consciousness, i.e. expression of choice, could be an argument that there is
no such consciousness
in a computer. Or, it could just be that computers are conscious but lack any
tools to exert free will. Their consciousness is "read only," so to speak.

~~~
jasonkostempski
Isn't the idea that we even have free will up for debate?

~~~
naasking
The very definition of Free Will is up for debate.

------
runeks
Could it be possible that there are some subjects which, when thought about,
bring us further away from the topic at hand?

For example, if we were to start thinking about the state of no thought — just
pure awareness — we’d understand less about what we’re trying to examine.

Thought is an amazing gift, but it’s limited. It would appear we’re reaching
that limit when we start thinking about consciousness — hence the wildly
varying thoughts on it.

~~~
red75prime
I think it's more about assumptions. For example, Chalmers in his "p-zombie
argument" implicitly assumes that the consciousness in one possible world and
the consciousness in a physically identical but different world are separate
entities.

Thus his argument is about "separable consciousness", and we don't know if
real consciousness is "separable consciousness".

~~~
mannykannot
Ah, the p-zombies: you know they are there, waiting for it to get murky enough
for them to come out (wait a minute... am I mistaking them for p-vampires?)

Has anyone seen a good argument that p-zombies are logically possible, that
goes beyond being some form of "I can string these words together, and they
form a grammatically-correct sentence?"

~~~
goatlover
Chalmers' argument is that all physical explanations leave out consciousness,
therefore a p-zombie world is logically possible. You just have a physical
world that's lacking consciousness.

Keep in mind that Chalmers is a property dualist who thinks there is some
additional law of nature binding consciousness with informationally rich
physical processes (not necessarily brain activity).

~~~
mannykannot
Yet this article and others seem to be suggesting that the purpose of
p-zombies is to argue against physicalism. It looks rather circular to me,
though I am sure it isn't that obviously so.

~~~
dasil003
Physicalism is also inherently a circular argument. It really gets to the root
of philosophy and metaphysics, which is that at some point you reach the
firmest axioms and base assumptions that can be reasonably supported by
evidence and still there is nothing stopping you from—like a 3-year-old—
asking "why?" just one more time.

~~~
mannykannot
Insofar as either position is an assertion of certainty, that's very
plausible.

------
monktastic1
Manzotti makes good points about how Chalmers' dualist / panpsychist view has
colored the debate and possibly hindered progress, but I was hoping for more
discussion of alternatives. Instead there's only a brief footnote claiming
that physicalism is obviously best.

I myself am fond of Bernardo Kastrup's work on Idealism:
[https://www.reddit.com/r/philosophy/comments/41gb0j/bernardo...](https://www.reddit.com/r/philosophy/comments/41gb0j/bernardo_kastrup_on_why_idealism_is_superior_to/)

~~~
visarga
> I was hoping for more discussion of alternatives

I think the best alternative is Reinforcement Learning. In RL, there is an
agent which exists inside an environment. It can perceive the world around,
move about and perform actions. The agent has a goal to achieve and receives
reward signals from time to time. Such an agent can learn behavior that
maximizes rewards.

That's what consciousness is. It is not an experience, it is a whole loop
"perception -> judgement -> action -> reward" that defines life moment by
moment. This also explains the role of consciousness - it is to choose actions
that lead to survival of the individual and of its genes (reproduction). It is
all a self replicating loop, in the end. The purpose of life is life, the
purpose of consciousness is to guard life.

Consciousness is a set of four functions (perception, evaluating actions,
acting, learning from reward signals) that work together to select the next
action, to protect the body, on which consciousness depends - yep, full
circle.

This whole "RL agent inside environment = consciousness" theory has the
advantage that it is concrete and not supernatural in any way, and has
promising applications (such as AlphaGo and self-driving cars).
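The "perception -> judgement -> action -> reward" loop described above can be
sketched in a few lines. This is a toy, hypothetical environment invented for
illustration, not any particular RL library's API:

```python
class Environment:
    """A toy one-dimensional world: reward is given for reaching position 10."""

    def __init__(self):
        self.position = 0

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.position += action
        done = self.position == 10
        reward = 1 if done else 0
        return self.position, reward, done


def run_agent(policy, max_steps=100):
    """One episode of the perception -> judgement -> action -> reward loop."""
    env = Environment()
    total_reward = 0
    observation = env.position                     # perception
    for _ in range(max_steps):
        action = policy(observation)               # judgement
        observation, reward, done = env.step(action)  # action
        total_reward += reward                     # reward signal
        if done:
            break
    return total_reward


# A policy that always moves right reaches the goal and collects the reward.
print(run_agent(lambda obs: 1))   # -> 1
# A policy that always moves left never does.
print(run_agent(lambda obs: -1))  # -> 0
```

A learning agent would additionally update its policy from the reward signal;
this sketch shows only the loop itself.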

~~~
psyc
You can choose to select a definition of 'consciousness' that does not depend
on experience, yet our experience is still wanting for an explanation.

~~~
visarga
You're making a distinction between "experience" on the one hand and
perception (as in eyes + vision areas + visual representations) on the other,
but perception is present in RL agents as well. Another component of inner
experience is emotion, which would be analogous to the value function in RL
and implemented in the same way, by neural nets.

------
phaefele
In the qualia / physical world distinction, I wonder if there are good reasons
we see things the way we do.

Take colors: colors map to wavelengths of light. I wonder if there is a good
reason for our perceptions of color to have red be lower frequency and indigo
be higher frequency. I guess what I'm wondering is if our brain mapped these
differently, it would be suboptimal in some way - perhaps the 'mixing' of
colors would work out less 'well' (e.g. red+yellow=orange wouldn't work in the
new layout as well.) If so, then perhaps one could use evolutionary selection
pressures as an explanation to lead us to the qualia we have.

Or have I perhaps just missed the whole point?

~~~
sunir
Indeed! Your visual system has a red-green band and yellow-blue band (cf
Hering theory). The colour wheel is a result of the physical structure of your
visual system. Purple itself is not a spectral colour but a hallucination
caused by red and blue firing simultaneously.
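A crude numerical sketch of the two opponent channels mentioned. The weights
here are illustrative only, not the actual physiological values from
opponent-process theory:

```python
def opponent_channels(r, g, b):
    """Map an RGB triple (each 0..1) onto crude opponent-process channels.

    The coefficients are illustrative, not physiological.
    """
    red_green = r - g                # positive -> reddish, negative -> greenish
    yellow_blue = (r + g) / 2 - b    # positive -> yellowish, negative -> bluish
    return red_green, yellow_blue


print(opponent_channels(1, 0, 0))  # pure red    -> (1, 0.5)
print(opponent_channels(0, 0, 1))  # pure blue   -> (0, -1.0)
print(opponent_channels(1, 0, 1))  # red + blue firing together: the
                                   # "purple" case, -> (1, -0.5)
```

The last case illustrates the point about purple: it is a joint red-plus-blue
signal rather than a single spectral wavelength.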

------
danielam
It's important for people to understand the legacy of the Moderns (in this
case, Descartes, though others as well, e.g., Locke) and that legacy's
detrimental hold over contemporary thought. However, while dualism is a
problem, materialism as it is generally understood doesn't escape that legacy.
For example, the problem of qualia (which is basically what this interview
circles around in all but name) is a problem materialism is doomed to languish
in precisely because it clings to a Cartesian account of matter, and this
impoverished view of matter is wholly incapable of accounting for qualia. So
simply dispensing with dualism is not going far enough.

~~~
marojejian
"this impoverished view of matter is wholly incapable of accounting for
qualia"

What approach to accounting for qualia do you prefer?

~~~
danielam
I don't accept the paradigm. The problem of qualia is only an issue if you
assume a Cartesian account of matter (CM) while dropping his immaterial
account of the mind. Descartes assumes that things like color are not
properties of material things but properties of minds. Materialism rejects the
notion of an immaterial mind, but does so without rejecting CM, so you're left
with a reality made of a kind of matter in which color cannot exist.
Materialists are therefore condemned to looking for an impossible solution for
as long as they fail to understand the futility of trying to find things like
color in CM. Materialism is a halfway house between Cartesian dualism and
eliminativism (where the latter denies the existence of things like color
because, having accepted CM, it can't account for them -- Procrustean, to be
sure).

The solution is not to embrace some version of dualism. Dualism has its own
issues. The solution is to embrace a richer metaphysics. Aristotle provides
such a metaphysics.

------
themgt
I don't take quite so pessimistic a view on Chalmers, whose exact ideas I'd
agree are difficult to nail down. Where I do wholly concur with Manzotti is
that "p-zombies" are basically begging the dualist question. I do not believe
true p-zombies who outwardly act identical are possible, and if they were
consciousness would be truly winnowed down to something almost irrelevant.

That said, a lot of Chalmers' acclaim has to be related to the inability of any
of the rest of us to truly pose an alternative. The hard problem is hard
because it seems something deep in science and reality itself would have to
give way to come up with a real solution.

~~~
Chathamization
> The hard problem is hard because it seems something deep in science and
> reality itself would have to give way to come up with a real solution.

An alternative is that qualia doesn't really exist (or doesn't exist beyond
being a vague description for emergent properties arising from our cells) and
that "consciousness" is a vague term with no clear definition. It's like if
someone came up with a "hard problem of ghosts," and said that there wasn't
any good alternative to where ghosts come from besides dead people.

It can be difficult to come up with an opposing theory as to why something
exists if you don't believe that it actually exists. This isn't because the
question is hard, but because you find the premise to be completely flawed.

~~~
carapace
Qualia exist. That's how we know that _anything_ exists.

~~~
Chathamization
Yes, this usually gets stated as if it were a fact, but never with any proof.
In fact, proponents of the idea often claim that qualia's existence can't be
detected or proven (the debate here even mentions it).

When a line of argument is based upon the existence of an invisible element
that can't be shown or detected, it starts to veer into the territory of
religion. If there's no evidence beyond "I feel it must be true," it's
important to acknowledge that others might feel differently.

~~~
goatlover
It's easy to show that qualia exist. You have a red experience in seeing a
ripe tomato. That redness is not part of the physical description of the
tomato's surface, the photons bouncing off it, your eyes, or the neuronal
activity in your nervous system as a result of seeing it. If you think it is,
then you need to explain what it means for a tomato to be red without anyone
seeing it, and how that redness gets into your brain, particularly since the
color would have to be transmitted from tomato surface to electrons in your
visual cortex. It gets worse for the other sensory modalities.

All of that leaves off the red experience. There is no red experience in the
physical world of things anymore than a tomato actually has an objective smell
or taste. Those are all mental and creature-dependent (carrion likely smells
and tastes wonderful to vultures but not humans).

Somehow, this is strongly correlated with perception and the brain, but how is
a deep mystery. This isn't to deny the brain or the eye's role in experiencing
red, only that we don't have an explanation for how the red experience is
present, when none of those things (or processes) are objectively red.

~~~
Chathamization
I'm not really sure what your point is. Color certainly can exist outside of
people seeing it; we can have machines detect it. We also understand how our
eyes detect it mechanically (and understand how it can fail, like when people
are color blind). We understand that our eyes connect with our nervous system
and brain and create signals when we encounter red, and that signal leads to
neuronal activity.

You claim that there's some other invisible force at work when this happens,
but what evidence is there to back that up? It seems like the mechanical
description does a pretty accurate job of describing what's happening.
Neuroscientists who have studied this don't seem to be running into any
issues. Not only is there no evidence for qualia, but the idea of qualia
doesn't seem to even solve any particular problem.

~~~
goatlover
> Color certainly can exist outside of people seeing it; we can have machines
> detect it.

Machines detect wavelengths of light. That's not what a color experience is.
Photons aren't literally colored red or green. That's just how we see objects
when the cones in our eyes are excited, sending electrical signals to our
visual cortex.

This is more obvious with smell and taste, since that greatly depends on an
animal's sensory apparatus. Nothing has an objective smell or taste.

> You claim that there's some other invisible force at work when this happens,

But I made no such claim. I'm only pointing out how the problem is hard. I
have no idea what the answer is.

~~~
Chathamization
> Machines detect wavelengths of light. That's not what a color experience is.
> Photons aren't literally colored red or green. That's just how we see
> objects when the cones in our eyes are excited, sending electrical signals
> to our visual cortex.

Right. We detect certain wavelengths, which activate certain receptors, which
send out certain signals, which cause certain neurons to fire. The "color
experience" is the neurons firing and how that interacts with other neuronal
activity. I'm not seeing where the need for qualia is, or what's particularly
hard about this.

~~~
goatlover
So you've adopted an identity view of consciousness, where having a red
experience is identical to the neuronal activity which results from our eyes
detecting light at a certain wavelength.

This is one way to go which possibly solves the problem. But it's not without
difficulty. One is that it's hard to see (pun unintended) how the explanation
for neuronal activity is the same thing as having a red experience.

Second problem is that it makes the physical system of brain activity special.
Searle might be down with that, but it won't help machine detectors have a red
experience. Or maybe even aliens, if they're made of something other than
meat.

It's also hard to see what makes neuronal activity any more special than the
cells in your toes or molecules in a rock. Why can't a rock have a red
experience when interacting with light at that wavelength?

~~~
Chathamization
> One is that it's hard to see (pun unintended) how the explanation for
> neuronal activity is the same thing as having a red experience.

I guess I don't understand why that's hard to see. It seems pretty
straightforward to me. We know that neuronal activity is necessary for the
experience, we know that it coincides with the experience, and we know that
changing how neurons operate can affect the experience. It doesn't seem like a
stretch to say that they are the experience.

> Second problem is that it makes the physical system of brain activity
> special...It's also hard to see what makes neuronal activity any more
> special than the cells in your toes or molecules in a rock. Why can't a rock
> have a red experience when interacting with light at that wavelength?

If we're defining "red experience" as the neuronal activity that humans have
when they see red, you're wondering why a rock can't have that? I mean, because
it's a rock that doesn't have a single neuron, let alone a human brain?

But beyond that - are there people that actually argue that brain activity
isn't special? You're not going to have a "red experience" if you have no
brain activity.

~~~
goatlover
> If we're defining "red experience" as the neuronal activity that humans have
> when they see red, you're wondering why a rock can't have that? I mean,
> because it's a rock that doesn't have a single neuron, let alone a human
> brain?

Because brains are made up of ordinary matter, just like rocks. If it's the
behavior of neurons that's special, then consciousness is no longer identical
with neurons, it's identical with any physical system that functions the same
way. And then you have the possibility of very counterintuitive arrangements,
like a billion Chinese instantiating a blue experience, or a meteor shower
simulating experiences.

> We know that neuronal activity is necessary for the experience, we know that
> it coincides with the experience, and we know that changing how neurons
> operate can affect the experience. It doesn't seem like a stretch to say
> that they are the experience.

But what is it about neurons that makes them experiential? And only some
neurons, because a lot of neuronal activity is not conscious.

~~~
Chathamization
> If it's the behavior of neurons that's special, then consciousness is no
> longer identical with neurons, it's identical with any physical system that
> functions the same way.

Sure, I don't think brain uploads are impossible. If we were having this
conversation in 100 years, I might be saying that "red experience" is the
result of neuronal/circuit activity. But "functions the same way" is the
relevant part - a rock does not function the same way. I don't think either of
us expect a human to act the same way if their brain is replaced by a rock. On
some level, we both know that the brain is fundamentally different from a
rock.

> And then you have the possibility of very counterintuitive arrangements,
> like a billion Chinese instantiating a blue experience, or a meteor shower
> simulating experiences.

I think we can both agree that a computer is merely a physical object. That
doesn't mean that surfing the web or playing a video game on "a billion
Chinese"/"a meteor shower" is any less counterintuitive. Imagining any
incredibly complex system being completely simulated by random physical
phenomena is bizarre.

And the human brain is much, much more complex than a laptop. Really, go read
up on it - 86 billion neurons, 100 trillion synapses, neurons firing 200 times
a second. Consider the work it takes to simulate one second of brain activity
(with far fewer neurons and synapses than a human brain)[1]:

> The simulation involved 1.73 billion virtual nerve cells connected by 10.4
> trillion synapses and was run on Japan's K computer, which was ranked the
> fastest in the world in 2011.

> It took the Fujitsu-built K about 40 minutes to complete a simulation of one
> second of neuronal network activity in real time, according to Japanese
> research institute RIKEN, which runs the machine.

> The simulation harnessed the power of 82,944 processors on the K computer,
> which is now ranked fourth on the biannual international Top500
> supercomputer standings (China's Tianhe-2 is the fastest now).

You're far more likely to see the dust in the air randomly play Casablanca for
you from start to finish than you are for meteor showers to randomly start
simulating the human brain. The human brain is complex. Really, really, really
complex. So complex that weird emergent stuff like "red experience" happens.
People have a hard time conceptualizing things when they become so vast;
that's understandable. But there's no need to invent invisible qualia simply
because we have a hard time understanding things on this scale.

[1] [https://www.cnet.com/news/fujitsu-supercomputer-simulates-1-second-of-brain-activity/](https://www.cnet.com/news/fujitsu-supercomputer-simulates-1-second-of-brain-activity/)
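A back-of-envelope sketch using only the figures quoted above, naively
assuming simulation cost scales linearly with synapse count (a rough
assumption, since real scaling is unlikely to be linear):

```python
# Figures quoted in the comment and the RIKEN/K-computer report.
human_neurons = 86e9        # neurons in a human brain
human_synapses = 100e12     # synapses in a human brain
sim_neurons = 1.73e9        # neurons in the K-computer simulation
sim_synapses = 10.4e12      # synapses in the K-computer simulation
sim_slowdown = 40 * 60      # 40 minutes of wall time per simulated second

# The simulation covered only a fraction of a human brain.
neuron_fraction = sim_neurons / human_neurons      # ~2% of the neurons
synapse_fraction = sim_synapses / human_synapses   # ~10% of the synapses

# Naively scaling by synapse count, one simulated second of a full human
# brain would take roughly this many wall-clock seconds on the same machine:
full_brain_seconds = sim_slowdown / synapse_fraction
print(round(neuron_fraction, 3))   # -> 0.02
print(round(full_brain_seconds))   # -> 23077 (about 6.4 hours per second)
```

Even with the machine that was once the fastest in the world, a full-brain
second would be several hours of compute, which is the scale gap being argued
here.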

------
ppod
I don't work directly in this field but I have read a lot of the literature.
First of all, here is my opinion, as I have written it in comments on similar
articles here:

--- "I don't see any advance in this debate since things that Dennett and
Hofstadter wrote 20+ years ago (both independently and in their co-authored
book "The Mind's I"). Is it really surprising that we have a first person
subjective experience? We know that we are incredibly complex things,
constantly integrating and acting on very complicated external stimuli. Such a
system should have references to its own body and its own neural states, its
train of reasoning should frequently include itself, its focus will drift
forward and backwards in time... this is just how a system like this would
work. If the system communicates about its state then its language should have
referents to these internal states, referents like "experience", and "feels
like", and "I understand". Is that surprising? Wouldn't it be surprising if it
wasn't like that? I think that Tononi's approach is a good approximation, but
it can't be a full solution because the word 'consciousness' is too
anthropocentric. One criticism of IIT showed that a seemingly uninteresting
complex artificial system could have a very high IIT complexity quotient. The
problem is that the things we use to define the term 'consciousness' are
things that can be approximated to varying degrees by chimps or dolphins or
generative adversarial networks or antfarms or thermometers. But behind our
use of the word 'consciousness' there is still almost always a very slightly
disguised dualism that uses it as a substitute for the word 'soul'." ---

Therefore, I think I agree with what Manzotti is saying here. Certainly I
think he is spot-on with this:

> Essentially, when Chalmers so dramatically announced “the hard problem,”
> insisting that we had no solution to the question of consciousness, he
> simultaneously assumed that the constraints governing any enquiry into it
> were already well defined and unassailable.

The most frustrating part of the debate is that most educated laypeople and
some professional philosophers will still say with great confidence:
"consciousness is a mystery/we don't know what consciousness is/ we don't
understand how the brain works".

I don't think these statements stack up against Dennett and Hofstadter's
work, and we have learned an incredible amount about how the brain works in
the last 20 years, both through human neuroscience and computational
neuroscience, and philosophers of the Chalmers type are wilfully ignorant of
it.

~~~
goatlover
I very much doubt that Chalmers is ignorant of anything that Dennett and
Hofstadter are knowledgeable about, given that he's been debating them and
participating in conferences where they give their talks for the past 20
years. What philosophers like Chalmers, Block, McGinn, Searle, and Nagel have
argued is that the neuroscientific and computational explanations cannot
capture subjective experience, because they are fundamentally abstract,
objective explanations. They are abstracted from our collective personal,
subjective experiences, to a mathematical idealization of the world which we
consider to be objective.

That abstract idealization fails to capture the subjective experiences of
color, sound, pain, pleasure, etc, because it's been divorced from them to
gain a God's eye view from nowhere, which has no color, sound, feels, etc. The
problem is a very deep philosophical one, which goes to the heart of objectivity
vs subjectivity, and has been around in some form since ancient philosophy,
both East and West.

~~~
ppod
I didn't mean to suggest that Chalmers is ignorant of Dennett and Hofstadter,
just of computational neuroscience. I don't necessarily think that Dennett and
Hofstadter's arguments rely on computational neuroscience, but the two
components are complementary, and in combination are fatal to proponents of
the "hard problem".

------
discreteevent
At the end he says that because the idea of consciousness gives man a special
status it is anti-Copernican. But apart from consciousness man already has a
special status. Humans are the only things we know of in the universe that are
capable of understanding the universe itself.

------
ozy
How can people have theories of consciousness and not include learning? The
brain is constantly predicting futures, thinking multiple steps ahead, and on
surprise, check what past prediction was off, hoping to correct it and learn.

In a way, we are observers, that closely observe ourselves.

------
AndrewKemendo
_Parks: Isn’t this pretty much the same divide made by Galileo and Descartes?
I mean Galileo’s claim that “tastes, smells, colors exist only in the
sensitive body” while “quantities, numbers, and relations” exist in the
physical world._

 _Manzotti: Of course, and Chalmers on a number of occasions has espoused
dualist positions, the idea that the world is divided into separate “realms of
reality.”_

Has there been any substantive new work on the consciousness question? This
feels like someone finding something old and then trying to shoehorn it onto
computing.

 _The idea is to develop more and more complex computers with numbers and
arrangements of connections similar to those of the brain, until consciousness
“emerges.”_

It's not even worth discussing this anymore.

------
pizza
Conscious thought is far more liable to interpretation than the apparently
encrypted nature of unconsciousness, whose contents I'd estimate would elicit
great surprisal in me if I could simply, magically, gain full access. If it
were capable of a meaningful representation, and if I could consciously
analyze unconsciousness without disturbing its structure through a kind of
self-referential paradox, then the relative entropy of learning "no, actually
_this_ is a more accurate view of my mind" is something I'd expect to contain
a lot of information that could possibly lead to a new approach to me being
me.

Some might say "even if you knew what was inside, it would simply be
meaningless - in fact, the only part of your unconsciousness that you _could_
understand would end up being exactly the same as your conscious experience".
Wittgenstein gave the answer "it would lack any sort of meaningfulness to you
whatsoever" to the question "what would we understand if we could listen to a
lion speak?" It's not unreasonable to assume that a memdump of unconsciousness
would actually provide little insight into itself. If interpretation of
unconsciousness is impossible internally because it has a kind of illegible
presentation from inside your mind, then there is no guarantee you would be
able to recast it into an internal and understandable grammar just as a result
of seeing it appear outside of your head.

Maybe consciousness and unconsciousness are the ends of a spectrum that
measures something like "the degree of sparsity of error-correction (ie.
error-correcting codes) necessary for providing the illusion of singular focus
while recovering information from billions of sources". Maybe there is a
compressive sensing-like sampling method that allows neurons to recognize a
cat through pretty short chains of neurons because of an object recognition
approach that greatly reduces the amount of computation necessary for the
sparsity of that signal..?

Hell, maybe consciousness is even way trippier than that, and emerges from
neural approximations of dynamically reconfigurable circuits that verify
zero-knowledge proofs..

Problem is that none of these explain where and why what is felt feels what it
does, though!

------
rytill
The discussion touches on this a bit near the end. The 'human consciousness is
specifically important' meme could be harmful as we proceed as an evolutionary
entity.

For those who anticipate an AI singularity: it may be harmful to the universe
when we decide on an objective function for our GAI that optimizes
specifically for human experience.

Humans were produced by a process: reality. We really should optimize the
process, if possible, and not just human or human-biased experience.

~~~
jchanimal
Be careful what you wish for. It may be that what is good in the world
(consciousness) is a mere side effect of what makes us effective survivors.
Perhaps we could kick off an evolution of compute-based life, which would
easily outmaneuver us for resources but has no value. Maybe start here for
an idea about why we shouldn't throw certain values under the bus, merely in
order to compete: [http://slatestarcodex.com/2014/07/30/meditations-on-
moloch/](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/)

~~~
AgentME
The book Blindsight by Peter Watts explores the idea that consciousness might
not be necessary for intelligence, and that it may be a temporary stepping
stone in the evolution of an intelligent species if it isn't specifically
safe-guarded. The book posits a superintelligent alien race^Wsomething without
consciousness that views Earth's many broadcasts (which are only valuable to
people with consciousness) as a malicious attack or pollution that only serves
to waste the processing time of intelligences like itself. It's a great book
exploring consciousness and the vast possibility space of intelligence.

------
mcguire
" _Manzotti: ...What matters for us, though, is that in his book The Conscious
Mind: In Search of a Fundamental Theory (1997), Chalmers laid out the terms of
the consciousness debate in a way that simultaneously excited everyone while
more or less guaranteeing that no progress would be made._

" _Parks: Quite an achievement...._ "

That's a pretty decent feat, yes.

------
kaffeemitsahne
Manzotti has a website exhibiting his "spread mind" theory of consciousness:
[http://www.thespreadmind.com/](http://www.thespreadmind.com/)

To be honest I thought it was crackpot-ish nonsense the last time I looked
into it, but I'll give it another go.

~~~
ppod
Ooh I hadn't seen this before.

First impression: gimmicky and just playing with language (as in so much of
modern philosophy), but I prefer it to Nagel or Searle or Chalmers, at least
it's fucking consistent:

>Emily sees a red apple floating in the air. However, there is no red apple in
front of her. She is hallucinating. Is this a case of experience without an
object? No. Emily perceives an apple that she met some time earlier in her
life. For instance, she experiences the apple that had been on her table
yesterday. Crucially, she perceives it, she does not re-imagine it!

Fine. That's not really how people use the word "perceive", but if that allows
you to accept how our "consciousness" can be a real-world thing, then it
works. Amusingly, the "spread mind" has a superficial similarity to one of
Chalmers' own ideas, the "extended mind". But I prefer Spread Mind because it
is physicalist and incorporates the obviously correct notion of embodiment
rather than taking the step (which Manzotti correctly diagnoses as insane) of
separating the cognitive and the conscious.

------
carapace
[http://www.nybooks.com/daily/2017/07/20/a-test-for-
conscious...](http://www.nybooks.com/daily/2017/07/20/a-test-for-
consciousness/)

This, linked from the article, is terrific too.

~~~
allenz
The entire series is great: [http://www.nybooks.com/topics/on-
consciousness/](http://www.nybooks.com/topics/on-consciousness/).

If you're interested, there are several high-quality popular philosophy
podcasts as well:
[https://www.reddit.com/r/philosophy/comments/11zcba/the_best...](https://www.reddit.com/r/philosophy/comments/11zcba/the_best_philosophy_podcasts/)

------
cerealbad
a high level parser which receives sense data and does scheduling. stay alive,
avoid pain, seek comfort, exploit opportunities. strain yourself to count 1 to
100 while re-reading this entire text from the beginning and visualise various
shades of blue. you are always performing some x number of n-dimensional-
matrix operations time-sliced from n+1-dimensional-space. i assume for humans
n = 2, and if you attempt to parallelize your thinking you end up doing a type
of merge operation to reduce computational complexity, for x = 3: a = 1 =
cobalt, high = 2 = prussian... cerulean = 92 = cerulean.

