
Roger Penrose on Why Consciousness Does Not Compute - dnetesn
http://nautil.us/issue/47/consciousness/roger-penrose-on-why-consciousness-does-not-compute
======
deepnotderp
I'm honestly not sure why artificial intelligence comes up every time
Penrose's hypothesis is mentioned. The point of artificial intelligence is not
(at least in my opinion, and that of several other prominent AI scientists
such as Andrew Ng) to create a _conscious_ intelligence, but to create
intelligence that can do many of the useful tasks that we can do. Whether or
not it's conscious along the way is largely irrelevant.

That being said, I'm not sure why there's quite so much vitriol towards
Penrose and his hypothesis. The leveraging of quantum effects in
photosynthesis and enzymes has been demonstrated, and recent studies show that
the sense of smell may also be based upon quantum phenomena. So it's not all
too unreasonable that there might be _something_ quantum going on, even if
it's not Penrose and Hameroff's microtubules.

Another interesting quantum hypothesis in neuroscience is Fisher's:
[https://www.quantamagazine.org/20161102-quantum-neuroscience/](https://www.quantamagazine.org/20161102-quantum-neuroscience/)

~~~
bitwize
> The point of artificial intelligence is not ( at least in my and several
> other prominent AI scientist's such as Andrew Ng's opinion) to create a
> conscious intelligence, but to create intelligence that can do many of the
> useful tasks that we can do.

That's only one school of AI ("weak AI"). The other school ("strong AI") says
that it's certainly worthwhile to create an intelligence that is capable of
thought the way humans are -- if only to get a better handle on what "thought"
and "intelligence" really are. Currently weak AI is "winning" because the
surveillance and online-advertising industries can get more use out of it. But
that doesn't put strong AI out of reach, nor render it wholly irrelevant.

~~~
deepnotderp
Currently weak AI is winning, but you forget that weak AI has always won
historically. The annals of history (as in the last ~100 years :P) are
littered with failed attempts at finding "consciousness", mixing
neuroscience with philosophy, with the occasional mathematician and, once in a
blue moon, a CS...

My skepticism comes less from a hatred of "Strong AI" and more from the fact
that it has been promised over and over again with no results to show for it.
In addition, every time "weak AI" makes progress, there's always someone who
writes an article bashing the progress of "weak AI" and saying "it isn't _REAL
AI_ (tm)".

This is kind of similar to the reason I dislike neuromorphic architectures,
you can't just assume that you're right and look down upon all others, you
need to show results if you want to do that.

~~~
lukas099
>My skepticism comes less from a hatred of "Strong AI" and more from the fact
that it has been promised over and over again with no results to show for it.

Could that just be because Strong AI is much, much more difficult to achieve
than Weak AI? And not that it is less valuable an end to pursue than Weak AI?

In other words, does the value of one vs. the other necessarily correlate to
our previous success in one vs. the other?

------
dvt
> This past March, when I called Penrose in Oxford, he explained that his
> interest in consciousness goes back to his discovery of Gödel’s
> incompleteness theorem while he was a graduate student at Cambridge. Gödel’s
> theorem, you may recall, shows that certain claims in mathematics are true
> but cannot be proven. “This, to me, was an absolutely stunning revelation,”
> he said. “It told me that whatever is going on in our understanding is not
> computational.”

This is a very strange conclusion to make. Maybe someone can elucidate.
Godel's theorems apply to tightly-controlled _formal_ systems. And they do
_not_ , in fact, apply to particularly weak systems (e.g. sentential logic).
Why would Penrose think that Godel has anything to do with consciousness? All
it seems to have done is prove that, in some systems, there are known
unknowables (specifically, that the consistency of sufficiently-complex
systems cannot be proved).

If anything, it should lead us to take a train of thought similar to
Chalmers': consciousness is unknowable (maybe kind of like God), but even
that's a stretch. Because, like I mentioned above, Godel's theorems are about
formal systems. Not only is the real world not a formal system, but (at least
on the quantum level), it's also non-deterministic. Now, there are
probabilistic logics out there that follow Godel's findings, but there's a lot
of work that needs to be done to bring that into the real world.

~~~
HONEST_ANNIE
Indeed. There is nothing to indicate that human brains have done anything that
can't be achieved with a polynomial-time heuristic search algorithm.

Most people who have read "The Emperor's New Mind" have been surprised that
Penrose doesn't seem to realize that. He has a very strong intuition about
consciousness and cognition, but he can't explain it to others.

~~~
sgt101
>polynomial time heuristic search algorithm.

Like create culture, write the works of Shakespeare or create a workable
theory of the mind of others? Or fall in love, teach a child to play cricket
or understand (and act on) an Opera?

Also, why should I care a heckin' heck about a polynomial-time heuristic
search? I'll work out complexity when I have bounds on correctness... not
before.

The argument that those that find something beyond formalism must formalise it
in order to be regarded as serious is not a serious argument!

~~~
aeorgnoieang
> The argument that those that find something beyond formalism must formalise
> it in order to be regarded as serious is not a serious argument!

Eh; waving your hands around and claiming, without much evidence, that of
course there's "something beyond formalism" isn't a "serious argument" either.

It seems like you, being presented with The Mystery of Consciousness, and
having the option to either explain, worship, or ignore it, are opting to
worship it. Hence:

> > polynomial time heuristic search algorithm.
>
> Like create culture, write the works of Shakespeare or create a workable
> theory of the mind of others? Or fall in love, teach a child to play cricket
> or understand (and act on) an Opera?

~~~
sgt101
Am I wrong that incompleteness demonstrates that there are systems that cannot
be completely described in formal terms?

~~~
Elrac
I'm no expert, but yes, I think you're wrong about that.

This explanation [https://www.scientificamerican.com/article/what-is-godels-theorem/](https://www.scientificamerican.com/article/what-is-godels-theorem/), a
little closer to layperson level, uses integer arithmetic as a simple example.
Peano's axioms completely describe integer arithmetic - easy peasy! What Gödel
says is that there are, nevertheless, statements about results in this system
that cannot be proved true (or false).

The problem appears to be not describing the system but proving every possible
conjecture about it.

~~~
blackflame7000
There are certain logical traps (paradoxes) that cannot be formalized by a
computer. For example, given the statement "This sentence is false", it is
not possible to deduce a boolean value describing the sentence. Thus there is
some property of the sentence that is not describable to a purely logical
system. Human consciousness allows us to spot the paradox, whereas a
polynomial search algorithm could not.

~~~
dTal
Formalizing or describing something is not the same as "deducing a boolean
value". But I can ask GNU Maxima to give me a list of all numbers that satisfy
the equation x+1=x-1, and it will happily tell me that there is no such
number. Maxima doesn't support self-referential propositional logic, but a
solver that can identify that no truth value can support "this sentence is
false" doesn't need to do anything more mystical than test both cases - no
"polynomial search algorithm" required.

You are suggesting that "human consciousness" is a separate, ineffable thing
from a "purely logical system", but I don't see any reason a computer couldn't
do what your brain is doing. You can't tell me the truth value of the sentence
either.
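The "test both cases" check described above can be sketched in a few lines of Python (a toy illustration, not any real solver): assign the liar sentence each candidate truth value and keep only the self-consistent ones.

```python
# The liar sentence S asserts its own falsehood, so assigning S the
# value v is self-consistent only if v == (not v) -- which never holds.
def consistent_assignments():
    return [v for v in (True, False) if v == (not v)]

print(consistent_assignments())  # [] -- no truth value satisfies the sentence
```

An empty result is exactly the "no such truth value" answer, found by plain enumeration rather than anything non-computable.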

------
dzdt
For an intelligent response to Penrose see Scott Aaronson :
[http://www.scottaaronson.com/blog/?p=2756](http://www.scottaaronson.com/blog/?p=2756).
Aaronson recognizes Penrose's genius while still disagreeing vigorously with
his out-of-the-mainstream ideas.

 _One of the many reasons I admire Roger is that, out of all the AI skeptics
on earth, he’s virtually the only one who’s actually tried to meet this
burden, as I understand it! He, nearly alone, did what I think all AI skeptics
should do, which is: suggest some actual physical property of the brain that,
if present, would make it qualitatively different from all existing computers,
in the sense of violating the Church-Turing Thesis. Indeed, he’s one of the
few AI skeptics who even understands what meeting this burden would entail:
that you can’t do it with the physics we already know, that some new
ingredient is necessary._

~~~
ABCLAW
Aaronson spends far too little time using the real cannons he's leveling at
Penrose. He is attempting to down a tree by snipping its roots individually
instead of just pushing the rotting trunk over.

Penrose's argument is hollow. We understand the biophysics behind how the
brain works; it isn't complicated at the level of detail you need to
understand how the system works. We understand how neurons interact with each
other. The evidence is consistent not only with our well-settled understanding
of chemical and biological systems, but also increasingly consistent with our
development of information systems at scale.

The real gap is whether or not the totality of 'consciousness' is really just
neural interactions at scale + starting state data, but the more we learn, the
more that mystery vanishes. We understand the low-level perceptor->analysis
models much better now, and we can map perceptor inputs at scale to outcomes
in model tuning. In short, the consciousness of the gaps is rapidly losing its
hiding spots.

Penrose's argument is taken seriously because we have collectively created a
tremendous philosophical and institutional infrastructure around the idea of
free will and the theory he attacks strongly implies there is some level of
determinism in our cognitive systems. Since he is irreproachable on a personal
or intellectual peerage level, he is a fantastic champion of this counter-
cultural perspective.

However, if we apply even the barest of epistemological tools to the issue, we
rapidly recognize that even if Penrose is correct, the chance of his position
being accurate is so remote and so unverifiable so as to be useless.

But the existence of a counter-argument against the deterministic mind itself,
absent of its validity, is itself useful. It allows us to collectively hem and
haw before changing our views and institutions to fit our understanding of how
people work. Which means Penrose's argument is not going away anytime soon.

~~~
sgt101
Hello

>However, if we apply even the barest of epistemological tools to the issue,
we rapidly recognize that even if Penrose is correct, the chance of his
position being accurate is so remote and so unverifiable so as to be useless.

You are saying that if he is right then it's probably not accurate?

>But the existence of a counter-argument against the deterministic mind
itself, absent of its validity, is itself useful. It allows us to collectively
hem and haw before changing our views and institutions to fit our
understanding of how people work. Which means Penrose's argument is not going
away anytime soon.

if the mind is deterministic how can we change our views - how can the
position be useful? Things are, and you will, or will not.

~~~
ABCLAW
>You are saying that if he is right then it's probably not accurate?

No, I'm saying it's overwhelmingly unlikely that he's right, and that even if
we're agnostic about which reasonable epistemological framework we use,
there's SO much evidence against his perspective that it isn't important.

>if the mind is deterministic how can we change our views - how can the
position be useful? Things are, and you will, or will not.

I've never understood this line of reasoning. If my perceptor is the output of
a Bayesian evaluation that uses information as an input, receiving new
information may or may not change the output.

How can a (meat) machine-learning algorithm ever change from its starting
state? Well, it is provided more cycles, changes state, and accordingly
changes output. This doesn't mean that a machine-learning algo needs to be
non-deterministic and non-verifiable in order to change from state to state.

As for the utility argument, I think there's a tremendous amount of utility to
be gained from understanding how consciousness actually works, even if it is
the we-are-bio-robots outcome.

~~~
sgt101
we-are-bio-robots = 0 utility.. as... why bother!

------
mcguire
" _[Penrose] explained that his interest in consciousness goes back to his
discovery of Gödel’s incompleteness theorem while he was a graduate student at
Cambridge. Gödel’s theorem, you may recall, shows that certain claims in
mathematics are true but cannot be proven. “This, to me, was an absolutely
stunning revelation,” he said. “It told me that whatever is going on in our
understanding is not computational.”_ "

Penrose starts from a specific conclusion, that formal systems are limited in
ways that _he_ clearly is not, and then searches for some way to explain the
difference.

~~~
drostie
That's kind of the _modus operandi_ of physics, I suppose. Max Planck started
from a specific conclusion, that the spectrum of the Sun is limited at high
frequencies in ways that Maxwell's theory clearly is not, and then searched
for some way to explain the difference.

~~~
Sean1708
The difference being that Planck knew what the reality was (we knew what the
spectrum of the sun looked like), but with consciousness we don't really know
what the reality is (there is no "spectrum" of the mind for us to match).

~~~
lodi
Okay how about Einstein then? My understanding is that he also worked
backwards in order to develop his theory of Special Relativity; he was trying
to "fit" a theory to a universe that presupposed that the laws of physics are
the same throughout the universe, including a constant speed of light for all
observers. Relativistic effects weren't known to be reality at the time this
theory was proposed. Much later, scientists developed experiments to confirm
relativity.

Maybe we're still searching for the right theory of consciousness to know what
exactly we're trying to measure?

~~~
some_guy_there
>Relativistic effects weren't known to be reality at the time this theory was
proposed.

Umnn, Michelson–Morley and Maxwell's equations were enough experimental
evidence before Einstein. Einstein's work was reconciling mechanics to conform
with electromagnetism, and not the other way round.

EDIT: Not to say there isn't _some_ physics theory developed independent of
experiments. But relativity is not a good candidate.

------
Chronos
A while ago I wrote up my objections to the Penrose-Lucas argument:
[https://chronos-tachyon.net/essays/penrose-objections/](https://chronos-tachyon.net/essays/penrose-objections/). I'm not super proud of how it turned
out (way too meandering), but it boils down to:

1\. Let's suppose for sake of argument that humans really can see the inherent
truth of "Peano Arithmetic is consistent". That doesn't mean humans violate
Gödel's Incompleteness Theorem: it could just mean that humans use axioms
stronger than PA.

2\. Gödel's Incompleteness Theorem only applies to systems that are perfectly
logically consistent. Not sure how Penrose didn't notice, but humans...
aren't.

3\. When scientists proposed Quantum Mechanics as a replacement for Classical
Mechanics, it was on them to explain how Quantum Mechanics simplified to
Classical Mechanics in the common case. "Penrose Mechanics" is an even more
radical departure — especially from a physics of computation standpoint, as
Penrose Mechanics by definition would allow solving at least some of the
problems in (ALL - R) in ~polynomial time. Penrose needs to explain how
Penrose Mechanics reduces to Quantum Mechanics in the common case.

4\. Penrose proposes that (a) there exist new physics, (b) that evolution has
learned to computationally exploit the new physics via microtubules, and yet
(c) that humans are the only lineage to make use of this feature of
microtubules, even though microtubules are found in all eukaryotic cells (from
mushrooms to amoebae). From a predator-prey standpoint alone, it would
seemingly be a _huge_ evolutionary advantage to be able to compute NP or R
functions in polynomial time. (That ability is not _strictly_ implied by
Penrose Mechanics, but it's a very likely consequence.) Penrose needs to
explain why only humans are taking advantage of the computational power of
microtubules, when microtubules have existed for billions of years and across
millions of species.

~~~
dilemma
>2\. Gödel's Incompleteness Theorem only applies to systems that are perfectly
logically consistent. Not sure how Penrose didn't notice, but humans...
aren't.

Why are humans not logically consistent then, if they are as materialists
claim, something that can be abstracted with a computer program if we have
full information of their workings?

~~~
Chronos
Um, you seem to be operating on a confusion of ideas. Materialism does not
imply that humans are logically consistent. The _universe_ in which we exist
is (probably?) a logically consistent system, but that's true for both
materialism and non-materialism. The difference between the two is which set
of rules the universe runs on, not whether those rule sets are internally
consistent.

When I say "systems that are perfectly logically consistent" and "humans...
aren't", I'm saying that _the ideas humans have in their heads_ are not
logically consistent. It's possible to write down "2+2=5" on a piece of paper,
even if 2 plus 2 doesn't actually equal 5, and it's likewise possible for
humans to believe "2+2=5" even if 2 plus 2 doesn't equal 5.

------
Animats
All the mammals have roughly the same brain architecture and the same DNA.
Whatever makes brains work is present at the mouse level in some form. We
really ought to be able to build a mouse brain by now. A mouse brain has about
75 million neurons. That's not a big number for modern hardware. If we knew
what to build, it would probably fit in a 1U rack.

Some years ago I met Rodney Brooks, back when he was doing insect robots. He
was talking about a jump to human-level AI as his next project. I asked him,
"Why not go for a mouse next? That might be within reach." He said "I don't
want to go down in history as the man who created the world's greatest robot
mouse." He went off to do Cog [1], a humanlike robot head that didn't do much.
Then he backed off from human-level AI, did vacuum cleaners with insect-level
smarts, and made some real money.

[1]
[https://en.wikipedia.org/wiki/Cog_(project)](https://en.wikipedia.org/wiki/Cog_\(project\))

~~~
sgt101
Yes, but one mammal has created a technological civilization.

There are no mice on HackerNews. I think.

~~~
abvdasker
_Yes, but one mammal has created a technological civilization._

We did, but is that really categorically different than groups of primates
using primitive tools [1]?

I think the idea that humans are categorically different from other species is
misguided. Instead, consciousness and intelligence seem to be more continuous
than discrete, particularly when looking at semi-intelligent animals like
monkeys, dolphins and octopi. Animals in that class can all learn pretty
complicated tasks and are able to make use of tools. Self-awareness and
consciousness aren't something we understand fully enough to even exclude all
animals from possessing.

The only thing that seems particularly unique about humans is our ability to
use complex language and record it. Passing down knowledge from one generation
to the next is the _only_ reason we have "technological civilization".

[1] [http://www.bbc.com/earth/story/20150818-chimps-living-in-the-stone-age](http://www.bbc.com/earth/story/20150818-chimps-living-in-the-stone-age)

------
deepnet
For the completely opposite view, listen to Daniel Dennett on The Life
Scientific [1].

Dennett argues that combining Darwin's strange inversion of reason (complexity
from bottom-up iterative refinement) with Turing's Universal Machine provides
a way of understanding how we are machines, built of machines, built of
machines, etc., and it is the hierarchy that allows the complexity of minds to
emerge.

That hierarchical iterative schemes are unexpectedly powerful is well mirrored
by the recent successes of deep neural nets, and Dennett cites Hinton.

It's worth a listen and summarises his new book, From Bacteria to Bach.

[1]
[http://www.bbc.co.uk/programmes/b08kv3y4](http://www.bbc.co.uk/programmes/b08kv3y4)

~~~
sgt101
Yeah, but we are made of The Universe, and yet The Universe _is_ and we can't
say why. So I don't get where Dennett's argument goes.

I don't think he does either.

[http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts-science-of-the-soul](http://www.newyorker.com/magazine/2017/03/27/daniel-dennetts-science-of-the-soul)

~~~
abiox
> yet The Universe is and we can't say why

is there a 'why'? some people say it was 'created', but then the creator 'is
and we can't say why'.

~~~
deepnet
"Perhaps the best way of seeing the reality, indeed the ubiquity in Nature, of
reasons is to reflect on the different meanings of “why.”

The English word is equivocal, and the main ambiguity is marked by a familiar
pair of substitute phrases: what for? and how come?”

“Why are you handing me your camera?” asks what are you doing this for?

“Why does ice float?” asks how come: what it is about the way ice forms that
makes it lower density than liquid water?"

P48, From Bacteria to Bach: The Evolution of Minds, by Daniel Dennett

------
darawk
Can anyone here defend this theory? I'm genuinely curious. It may well be the
case that human consciousness relies on quantum effects. But we know that we
can simulate a quantum computer using a regular computer. Which means that in
principle, you don't need QM to have consciousness, even if human
consciousness makes use of QM.
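A toy sketch of that premise, for a single qubit only (the classical cost grows exponentially with qubit count): a quantum state is just a vector of complex amplitudes and a gate is a matrix, so ordinary arithmetic simulates the evolution.

```python
import math

# One qubit: amplitudes for |0> and |1>. A gate is a 2x2 matrix.
def apply_gate(gate, state):
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]  # Hadamard gate

state = [1.0, 0.0]            # start in |0>
state = apply_gate(H, state)  # equal superposition of |0> and |1>
state = apply_gate(H, state)  # H is its own inverse: back to |0>

probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[1.0, 0.0] up to floating-point error
```

Nothing here requires quantum hardware, which is the point: whatever a quantum computer can compute, a classical one can too, just possibly much more slowly.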

So, while it may or may not be true that the brain uses QM, it doesn't seem to
really explain anything of interest. It doesn't make consciousness any less
mysterious, or give any real insight into how we might create or understand
our own consciousness.

Given that (or refute the premises, if you please), why is this theory
interesting, relevant, or correct?

~~~
ankostis
Penrose explicitly excludes quantum computing from the basis of consciousness.
Unfortunately I didn't find the exact interview where he was stating this, but
he was clear on it:

Quantum computers are computers after all, and he is talking about non-
computable physics.

The reason he looks at quantum mechanics is that it seems to be missing
something from our understanding (the "reduction" of the unitary evolution),
and he "hopes" that this is non-computable.

Does that make sense?

~~~
darawk
Sort of. But fundamentally, if there is a quantum phenomenon that our brains
are harnessing, then so too can a quantum computer. So I don't see how he can
draw such a distinction, even in theory.

------
d--b
> And for all the recent advances in neurobiology, we seem no closer to
> solving the mind-brain problem than we were a century ago

Er... Maybe not consciousness-brain (although I'm no expert on this at all),
but it's hard to dispute that we have a MUCH deeper understanding of the
movement-brain, perception-brain, memory-brain, and problem-solving-brain
relationships than we had 100 years ago.

Consciousness is a touchy subject. We've been studying cancer for a long time
and haven't found a cure for it either. Yet, no one thinks that cancer
originates in quantum effects. "we're no closer to solving it" is not a good
argument in favor of one theory over another.

~~~
inputcoffee
It depends on what you mean by "relationship"

Consider that we know "where" things happen, but it doesn't follow we know
"how" things happen.

It is a bit like opening up a computer and taking a thermometer and measuring
the heat kicked off by various parts.

Now I can tell you that when I run MatLab, the part called "CPU" gets hot. And
when I run games the part called "GPU" gets hot. So clearly the CPU is the
Matlab part of the computer and the GPU is the games part of the computer.

What we need is a theory of software before we can make progress, and that is
what we lack.

~~~
SubiculumCode
That is a much more obtuse view of the state of neuroscience than is deserved.
Sure, it started out as: oh, this part of the brain needs more oxygen when
remembering something; and oh, if that part of the brain is removed then
people can't remember much. But it progresses to: ah, this part of the brain
is important to memory that binds multiple features, not within objects but
between objects. Oh, computational models suggest pattern separation and
completion operations that allow orthogonalization of activity during encoding
of similar memories, and retrieval of the full pattern of activity from
reactivation of part of the activity. So let's make an experiment that
modulates similarity between memories and modulates the completeness of
retrieval cues for bound percept pairs, and/or in animal models directly
modulate activity in the sub-parts of that brain region thought to be
responsible for separation or completion operations, to see if we can impair
or enhance those operations. Anyway, that's just my field.

Edit: Adding some references. Not sure how to format it into a list.

Pattern Separation and Completion in Dentate Gyrus and CA3 in the Hippocampus:
[http://science.sciencemag.org/content/315/5814/961](http://science.sciencemag.org/content/315/5814/961)
[http://www.sciencedirect.com/science/article/pii/S0149763412001674](http://www.sciencedirect.com/science/article/pii/S0149763412001674)
[http://www.sciencedirect.com/science/article/pii/S0896627313010854](http://www.sciencedirect.com/science/article/pii/S0896627313010854)
[http://science.sciencemag.org/content/319/5870/1640](http://science.sciencemag.org/content/319/5870/1640)
[https://www.ncbi.nlm.nih.gov/pubmed/26190832](https://www.ncbi.nlm.nih.gov/pubmed/26190832)
[https://scholar.google.com/scholar?q=pattern+separation+completion](https://scholar.google.com/scholar?q=pattern+separation+completion&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwiQ36iK4tbTAhUC6GMKHXVuAXYQgQMIJDAA)

Relational Binding:
[http://www.sciencedirect.com/science/article/pii/S0166432813003094](http://www.sciencedirect.com/science/article/pii/S0166432813003094)
[http://psycnet.apa.org/journals/neu/29/1/126/](http://psycnet.apa.org/journals/neu/29/1/126/)
[http://journals.sagepub.com/doi/abs/10.1177/0963721410368805](http://journals.sagepub.com/doi/abs/10.1177/0963721410368805)

~~~
ColanR
Honestly, it just sounds like the field has managed to go from taking a big
blob ('cpu'/'gpu') and finding basic relationships ('matlab vs games') to
working with much smaller blobs that are closer to the base principles at
work.

Illustrating from your example, current state of the art seems to have managed
to break the big blobs into smaller blobs (e.g., now we're looking at 'memory
that binds multiple features between objects'), and then found more complex
relationships ('looking at separation or completion operations in similar
memories').

That still doesn't tell us how the actual programming works. We barely
understand the role of dendritic spines, and we're still trying to get a
handle on the utter complexity of single neuron interactions in the neocortex.
He might have oversimplified, but I don't think he's wrong.

(Just playing devil's advocate. Would love to know how badly I misunderstood.)

Edit: nearly all of your links are paywalled.

~~~
SubiculumCode
Oh sure, there is some blobology to it still... but the nature of the function
attributed is becoming more precise: not a memory blob, but a blob associated
with signal orthogonalization. But that is just the functional neuroimaging in
humans, in which we are working downwards, while neuroscientists using methods
that influence activity at the neural level, or even the dendritic level, work
upwards. Each informs the other. For example, the computational models of
pattern separation and completion are derived from observations of the
structure of connections between neurons, which led to hypotheses about what
kinds of operation it could support, which led to tests of that hypothesis in
rats, and eventually in humans. But at the heart of it, I don't think we have
to be completely reductionistic to gain understanding. We don't have to fully
understand in every detail how dendrites operate to understand that a
collection of neurons can support a particular data transformation.

------
RichardHeart
Things that think they're special and important outperform things that don't.
The side effect of that is that they justify their specialness with things
that are hard to understand, an appeal to complexity. In this case, the
complexity is invented. There is no hard problem of consciousness. What we do
isn't so magical, or special, compared to what animals, or machines, or other
actors with agency do.

Consciousness is rare and beautiful, however, not magic.

------
strainer
There is something so special about being conscious - something precious and
undefinable - that I would be surprised if an intractable connection to
reality were not intrinsic to it.

I would be surprised if the programmed representations of things which I have
long played and worked with - the symbols, numbers, vectors and virtual
objects - could actually have some consciousness of their own, as though the
only likely difference between simulation and reality were just a matter of
scale and/or perspective.

------
lordnacho
I don't get how Gödel leads to quantum consciousness. Even if you need quantum
to make consciousness, doesn't that just mean we're all quantum computers
instead of classical chemical computers?

If we're all made of Lego, but we need Technics sticks and joints to become
conscious, how is that any less materialistic? Even the fact that quantum is a
source of randomness doesn't seem essential.

~~~
sgt101
There may be different classes of quantum computers, in the sense that some
algorithms - including algorithms we do not yet have - may not execute on all
the quantum computing surfaces we have. Additionally, current quantum
computers are heroic devices; it may be beyond our civilization to build ones
that are equivalent to a human brain - or that elucidate how any brain works.

------
bra-ket
Related:

[https://en.wikipedia.org/wiki/Quantum_cognition](https://en.wikipedia.org/wiki/Quantum_cognition)

[https://en.wikipedia.org/wiki/Holonomic_brain_theory](https://en.wikipedia.org/wiki/Holonomic_brain_theory)

------
stupidcar
Orch-OR isn't very convincing as a theory, but if you just read it as sci-fi,
it's incredibly entertaining. So many weird, quasi-spiritual ideas.

~~~
skdotdan
I agree. I would love to read a sci-fi novel on this topic.

~~~
runT1ME
Anathem by Neal Stephenson is basically hard sci-fi directly inspired by
Penrose's book "The Emperor's New Mind". Also one of my favorite books. :)

~~~
skdotdan
Thanks!

------
eosophos
Glad to see Ayahuasca get at least a passing mention here.

Let anyone who thinks they understand consciousness inhale >18mg of N,N DMT
and return humbled.

------
csomar
Does that mean that animals don't feel pain? Or that they are conscious? I
don't mean to tarnish his research with a simple fact but I guess it's
something worth exploring. Do animals feel pain the same way humans do? Is
there a measure of consciousness?

~~~
sgt101
There's a measure of consciousness - if you accept that other adult humans are
conscious, then at some point babies move from not being capable of
consciousness to being capable of it (maybe 20 weeks?). If you accept that,
then go and play with a dog and reflect on awareness/awakeness/agency, and
compare with a baby.

------
inputcoffee
The Penrose Fallacy:

1. We don't understand Quantum Mechanics

2. We don't understand the mind

3. Therefore QM explains the mind

Or as Steve Harnad once said, he takes all the embarrassments and failings of
one field and marries them with another.

~~~
CuriouslyC
Except that quantum indeterminacy provides a reasonable mechanism for free
will, and entanglement provides a reasonable mechanism for the fact that a
bunch of disconnected neurons produce one mind. In fact, in both cases they
are the only possible mechanisms that don't involve completely new physics.

~~~
inputcoffee
I would love to see a survey of the group of people who find Penrose
compelling.

Anecdotally, I find the people drawn to his view are largely physicists, and
the people who scorn him are largely "cognitive scientists". (I am closer to
the second group than the first.)

What you find to be "reasonable mechanisms", I find to be completely
unreasonable. No, I don't think entanglement is a "reasonable mechanism" for
"the fact that a bunch of disconnected neurons produce one mind."

I don't even know why one would think that quantum entanglement is even
vaguely relevant to the "problem", and I think it has something to do with a
misunderstanding of the problem.

Do you think that there is a problem of mind in the form of "a bunch of
disconnected neurons produce one mind"?

~~~
SomeStupidPoint
You don't actually give any reasons.

 _Why_ do you find it unreasonable that entanglement would be involved with
integrating signals from disparate regions or several neurons?

The problem is how we integrate the signals from many regions into whatever
is generating the single perception (or perhaps several parallel perceptions
-- I don't _know_ that there aren't other experiences coincident with mine,
just that I have access to one of them).

Unless you're postulating that a single neuron at a time is responsible for my
subjective experience, then you _do_ need to explain how several neurons are
generating a single signal.

My experience with cognitive scientists is that they simply punt on the
problem, completely failing to address how the signal is amalgamated into a
single stream of experience even as they talk about what regions contribute
features of it.

~~~
inputcoffee
Because that isn't the problem. It is not a correct or good statement of a
problem.

Let me put it this way, suppose there was a mechanism, call it X, that
explains to your satisfaction how many different neurons can "integrate the
signals from many regions".

Does that explain to you why physical neurons create a subjective experience?
Would you now consider the "problem of consciousness" to be solved?

Leaving "consciousness" aside, there is no issue at all of course. We have a
straightforward computational theory of how different neurons can integrate
their output and produce a computation.

The problem is that this idea of "integrating" signals is not clearly laid
out. If it is not the computational problem, then what sort of problem is it?

Koch and Crick have also proposed "a mechanism" for something like the
coordination of various neurons to explain visual awareness. A frequency at
which all the neurons "cohere." (Koch is also a real genius, by the way.
[https://profiles.nlm.nih.gov/ps/access/SCBCFD.pdf](https://profiles.nlm.nih.gov/ps/access/SCBCFD.pdf)
)

Again, the problem with their proposal is that it is not really a "mechanism."
Suppose all the neurons that recognize a face happen to be vibrating at 40 Hz.
Does that "explain" "consciousness"?

I lean towards the philosophers. Read one good critique by Fodor, and you
realize that the questions are poorly formed.

~~~
SomeStupidPoint
It proposes a way by which those neurons can create subjective experience,
yes. Namely, that the medium they're creating the signal in is fundamentally
experiential and perturbations in it are fundamentally experiences of some
variety. There are lots of philosophical reasons to think that this is the
case for _some_ aspect of reality (or perhaps fields in general, even).

This would give us a science of experience and allow us to categorize and
create new experiential beings. That's a goal that many of us have.

Now, for various reasons, we might suppose a medium we're already aware of (or
mediums in some combination -- as is the position of the materialist). Then
the question becomes entirely about the integration mechanism, as that's the
only piece of the puzzle we don't know.

So if you're a materialist, the integration of brain patterns into a signal
which corresponds to subjective experience is _the_ question of creating new
'souls' (in the sense of beings who experience), and also of categorizing what
exactly has a 'soul' (in the sense of inner experience).

The problem is how the computational structure which produces our behavior
couples to our experience of it, which is a consistent, evolving perturbation
in _something_, even if just in the sense of being a quasiparticle formed of
the constituent parts interacting. (Though, likely, we're missing parts of the
computational story -- I don't think most serious scientists would argue
that.)

I think you just don't like the question, but it's pretty clearly formed, at
least as far as big research directions ever are.

~~~
inputcoffee
I am sorry but you lose me here.

"It proposes a way by which those neurons can create subjective experience,
yes"

I just don't see it.

Suppose you somehow show me that various neurons contain particles that are
entangled. Let's say I believe you.

So now how do they create subjective experience?

I see blue because my neurons are entangled? I see a flower because my neurons
are entangled?

What is "the medium that they are creating"? How are you even using these
words?

If I knew that my neurons have entangled particles, I would know no more about
consciousness than I do now.

~~~
SomeStupidPoint
Particles don't create a medium, they're constructed out of perturbations in a
medium, eg fields. Your paraphrase in quotes isn't _anything_ like what I
actually said in my post -- you dropped key words from it.

What I actually said, and you misunderstood: "the medium they're creating the
signal in". Your paraphrase is so far from that, I have trouble even
addressing the confusion. It's particularly egregious that you omitted key
words when you're clearly using them to make passive aggressive comments about
_my_ understanding.

The answer to how this addresses the question is that the medium _is_
experiential -- anything that creates effects in it is 'experiencing'.
_Experiencing_ seeing blue corresponds to a particular pattern of activity in
the medium. (Though, there is no default 'experience of seeing blue' -- it's
possible that we experience different ones, different animals have different
ones, etc. This relates back to the computational structure.)

How entanglement is involved is that there's a single 'unified' experience
when we know that different portions are generated in different brain regions,
but don't (obviously) converge the information to a point (eg, it's always
spread at least in a cluster of neurons), which suggests that we're talking
about something non-localized in the medium, ie, not a point object.

The source of a lot of non-locality is through entanglement, so it's
reasonable to conjecture that it's involved here, as well -- though, of
course, it could be a different mechanism.

If the pieces of your brain which are generating the perturbation in the
experiential medium(s) are entangled, it could provide a model of how it's
carrying a non-localized piece of information -- your experience.

~~~
aninhumer
>The answer to how this addresses the question is that the medium is
experiential -- anything that creates effects in it is 'experiencing'.
Experiencing seeing blue corresponds to a particular pattern of activity in
the medium.

But if your theory is that consciousness arises as a result of particular
patterns in a medium, couldn't that medium equally well be "the universe"? Why
do you see it as necessary for the medium to be entangled? Why couldn't
consciousness just be the result of patterns in non-localised particles?

~~~
SomeStupidPoint
The medium isn't entangled, particles in it are (with each other), creating a
larger non-localized object out of the smaller, (more) localized particles.

Entanglement _is_ a mechanism by which a single, non-localized value can be
carried by pieces (none of which themselves carry that value).
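
The claim that entangled pieces can jointly carry a value that none of them carries individually is a standard quantum-information fact, and it can be checked numerically. Here is a minimal sketch (my own illustration, not from the thread) using a two-qubit Bell state: each qubit's reduced state is maximally mixed, so no local measurement reveals anything, yet the joint state is perfectly correlated.

```python
# Sketch: a Bell state carries a non-local correlation that
# neither qubit carries on its own. Requires numpy.
import numpy as np

# |Phi+> = (|00> + |11>) / sqrt(2), as a 4-dim state vector
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

rho = np.outer(phi, phi)  # joint density matrix (a pure state)

# Partial trace over qubit B: reshape to (a, b, a', b') and
# trace out the B indices, leaving qubit A's reduced state.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Qubit A alone is maximally mixed (I/2): zero local information.
print(np.allclose(rho_A, np.eye(2) / 2))  # True

# Yet the joint measurement statistics are perfectly correlated:
# outcomes are always 00 or 11, never 01 or 10.
print(np.abs(phi) ** 2)  # [0.5, 0, 0, 0.5]
```

This is only an analogy for the poster's argument, not evidence about brains, but it shows the information-theoretic structure being appealed to: a definite joint property with maximally ignorant parts.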

And hence the materialist proposal here is precisely what you propose: that
the universe is fundamentally experiential (or at least parts of it that
brains interact with), and the single, non-localized experience is created and
carried via entanglement of the constituent "particles", which are themselves
impacted by the regional computation of the brain (eg, by knotting with the
"main" knot carrying the experience, in analogy to a TQC), thus allowing the
disparate regions to contribute to a single experience with characteristics
picked up by regional contributions.

Proposing that it's _not_ entanglement means proposing a new physical process
by which that information can contribute to a single, non-local value, which
people researching the brain haven't even taken a stab at.

Which is why I find their proposal that it's _not_ entanglement to be strange
-- they seem to blithely ignore the physics/information theory implications of
that.

~~~
marcosdumay
> Proposing that it's not entanglement means proposing a new phyical process
> by which that information can contribute to a single, non-local value, which
> people researching the brain haven't even taken a stab at.

I've never seen a brain researcher claiming to have seen any non-locality. It
would be the kind of thing everybody would be talking about. Mostly, people
seem happy to accept many speed-of-light delays on something that is
centimeters wide and counts time in dozens of milliseconds.

You want a way to keep coherence overall through the brain, but there's no
reason to think that coherence is needed.

~~~
SomeStupidPoint
Except that researchers also haven't seen the information converging to a
single point, which should be obvious if it's happening.

I agree that the computing is done with moving charges and chemical
propagation (at substantially below the speed of light), but that doesn't
account for how the information gets integrated into a single signal for
experience. But we also see that kind of structure in, eg, the proposal to
build a TQC, where you build a computation into non-local structures built
out of entangled particles by moving around charges at substantially lower
than the speed of light.

If it's not being amalgamated into a point object (which doesn't seem to be
the case -- there's no 'you' point in the brain), then it must be being
amalgamated into a non-local object.

------
prmph
Basically, "solving" consciousness means being able to answer this question
(it is stated in simple terms, but interesting to grapple with nonetheless):

Say a loved one died. Let's assume technology has progressed to the point that
before he died, his brain and body were recorded in exact detail and
replicated.

Would you accept a simulation or instantiation of this brain/body as a
"resurrection" of your loved one?

If not, then consciousness is not simply a matter of computation.

~~~
drvdevd
I like to pose the question subjectively: what would it be like to have my own
consciousness suddenly operating within a computer instead of my body, _right
now_? Would the machine have to simulate my bodily experience to give my mind
a frame of reference?

I would say once I can go back and forth and _know_ what it's like, then and
possibly only then, I could accept such a resurrection.

I'm curious about what Penrose would say on this perspective.

~~~
sgt101
Ok - imagine you are completely paralysed, or blind, or deaf, or all of the
above.

Are you dead?

People do, and have, experienced a there-and-back from this:

[https://en.wikipedia.org/wiki/Kill_Bill](https://en.wikipedia.org/wiki/Kill_Bill)

(also sleepin' dreamin')

------
goatlover
A simple argument for why consciousness doesn't compute, which doesn't rely on
QM, is that computation is a form of abstraction from subjective experience,
as are all explanations and models. To achieve objectivity, we divorce our
creature-specific and individual experiences to arrive at patterns common
across experiences, and we call those real.

To go beyond that is to make a metaphysical argument that computation,
information, or math makes up reality (as opposed to just things in the case
of materialism, or ideas in the case of idealism, or the unknowable noumena in
the case of Kant).

