
Can computers become conscious?: My reply to Roger Penrose - yoha
http://www.scottaaronson.com/blog/?p=2756
======
e12e
My initial thought about this is to reformulate the question as: "Can humans
become conscious?". I don't think most people would claim that sperm and egg
cells are conscious. Nor do I expect many people to dispute that all humans
come from joining these two non-conscious elements. Most (but maybe not all!)
would probably agree that the embryo, consisting of a handful of cells, is
not conscious.

Yet we interact with people all the time, and at least some of those we
consider to be conscious. So, _by definition_, the answer to the question is
yes. But I think that exploring _why_ that is so can also shed some light on
what we mean by "being conscious", how one might demonstrate that something
is (or isn't) conscious -- and how much of being conscious relates to being
able to relate to others that we consider to be conscious as well.

If we could (and were willing to) raise a child without _any_ human contact,
what would such a thing become? Would it really become something we would call
human? Conscious?

I think that going down this line of questioning, we'll find some ideas about
emergent systems, part-whole relations and in general a lot of complexity.

I also think it's hard to claim that if we can merge a sperm and an egg cell
and create a new conscious system, we couldn't create some artificial
(meaning constructed by us, not "found" in nature) system that develops into
being conscious.

IMHO, the controversy isn't whether an artificial construct can become
conscious, but what consciousness is, and how we know it's there.

~~~
sandworm101
Consciousness is not a bright-line rule. Humans like to believe that we are on
one side of a great divide between conscious and unconscious animals. But all
there is is a gradient between us and the bacteria, with infinite stops in
between.

The interesting twist in Ender's Game (and other scifi) is that the alien
invaders do not understand humans as conscious beings. I see no reason to
believe that there aren't beings on earth, animals, who are farther up the
spectrum than we are. There are some very large brains out there that we do
not understand.

~~~
e12e
There are many interesting lines of reasoning around this. The idea of Gaia
as a conscious system is one. If the earth thinks at the speed of one thought
every year, or every 100 years -- and perhaps communicates with other such
beings -- how would we know? (I don't have strong feelings about concrete
ideas of Gaia as a conscious system, one way or the other -- other than that
I find it an interesting idea, and a good storytelling tool.)

~~~
meric
A pair of conjoined twins can read each other's thoughts and see through each
other's eyes. Are they two people, or one?

[http://www.dailymail.co.uk/news/article-1331769/Doctors-
stun...](http://www.dailymail.co.uk/news/article-1331769/Doctors-stunned-
conjoined-twins-share-brain-thoughts.html)

A person has left and right hemispheres in their brain. Under some
circumstances, losing one of these hemispheres will result in an almost
normally functioning human being. Imagine that, with advances in biology and
medicine, one day we can separate one person's brain and place one half in
one body and the other half in another (an artificial body, or one whose
original owner became brain dead), with both halves functioning as people.
Are they two people, or one?

I think it depends on the observer. Some people would choose to see (one or
both cases) as two people, and some would choose to see them as one.

But let us go back into history. The universe started from the big bang, as a
big expanding cloud of hot gases. It was _one_ point, and then it became _one_
cloud. Billions of years and lots of things happening later, bits of that
cloud grew eyes, legs and started to think for themselves. Are those
individual bits of the universe, many individuals, or one universe?

I think, both. It's like asking a network of neurons, are you one, or many?

The _universe_ is a conscious system. It can see itself through many eyes. It
sees different parts of itself. It is having many thoughts.

I, a shard of the universe, have replied to you, another shard of the
universe, by typing this on yet another shard of the same universe, at the
same time as my own neurons are communicating to each other.

~~~
language
I think the metaphor I would point to here is "whirlpools in a river."
Movements of movements of movements ... ad infinitum.

Another interesting way of articulating this might be: "The universe splits
itself into two parts - an observer and an observed."

~~~
arethuza
As an aside, in the Xeelee novels there is an intelligent species that are
formed from vortices in liquids...

------
gpderetta
The most interesting definition I have seen of consciousness is [1] "the mind
thinking about the mind thinking", i.e. some sort of meta-level "diagnostic
process".

What I find interesting is that this definition supports awareness and the
existence of an 'I', but doesn't imply any sort of control or direct influence
on the unconscious mind -- something there seems to be experimental evidence
against. Instead, the result of this (literal) 'reflection' would be fed back
to the unconscious mind as a sensory input like any other.

Such a reflective process could possibly arise evolutionarily as it is a way
to improve the efficiency of the thought processes.

I know absolutely nothing about neural networks, but I wonder what would happen
if the time series of the internal signals of a NN was fed to another NN and
its output back into the first NN. Probably nothing stable at all and training
would be 'interesting'.
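
A minimal sketch of that wiring, purely illustrative (all names, sizes and
weights here are made up): network A processes input, network B "observes"
A's hidden activations, and B's output is fed back to A as extra input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Network A takes 4 external inputs plus 2 feedback units from B;
# network B reads A's 8 hidden activations.
Wa = rng.normal(size=(8, 4 + 2))
Wb = rng.normal(size=(2, 8))

def step(x, feedback):
    hidden_a = np.tanh(Wa @ np.concatenate([x, feedback]))  # A's internal state
    feedback = np.tanh(Wb @ hidden_a)                       # B's "reflection" on A
    return hidden_a, feedback

x = rng.normal(size=4)   # a fixed external input
fb = np.zeros(2)
for _ in range(50):      # iterate the loop; it may settle, oscillate, or wander
    h, fb = step(x, fb)
print(fb)
```

Whether such a loop settles into anything stable depends entirely on the
weights, which is exactly the training difficulty the comment anticipates.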

Another interesting thing is that consciousness would be 'just' a process and
not necessary for intelligence to arise. Peter Watts in 'Blindsight' [2]
explores this last point in depth.

[1] Unfortunately I can't remember where I read this, nor have I been able to
track it down.

[2] An extremely good first-contact hard-scifi novel; it can be read for free
from the author's website.

~~~
mh-cx
Not really a definition, but my very personal take on what consciousness is:

Evolution has brought about creatures that are more successful (in regard to
survival and reproduction) the more predictive power they have about their
environment[1]. One could say that the better a species can simulate its
environment, the more likely it is to continue to exist and be part of the
ongoing evolution. Our brains are of course also just products of this
evolutionary process, and by now very good at simulating their environment.

Now if we think about Plato's cave, we could say that all our experience is
just an illusion. It's all just a projection of reality that our brain
creates. I'd go a step further and say: experience, or qualia, is what the
process of simulation of reality feels like. Or to put it another way:
consciousness _is_ the process of simulation of reality. The "better" the
simulation or the more complete and closer to "reality" it is (whatever that
means) - the higher the form of consciousness.

Taking this definition, one must also concede that even animals and "lower"
life forms have some degree of consciousness. It probably feels like something
to be a streptococcus.

Of course all this is just blind speculation - but it somehow feels _right_ to
me.

[1] [https://www.quantamagazine.org/20151119-life-is-
information-...](https://www.quantamagazine.org/20151119-life-is-information-
adami/)

~~~
Xcelerate
> Consciousness is the process of simulation of reality. The "better" the
> simulation or the more complete and closer to "reality" it is (whatever that
> means) - the higher the form of consciousness.

So would you say that a supercomputer in the process of calculating the
gyromagnetic ratio of an isolated electron to 12 digits of accuracy is a
higher form of consciousness than we are? After all, it is performing a much
better simulation of reality than our brains in that case.

~~~
mh-cx
No, because it only simulates a very small part of reality. For example it
could never estimate the weather for the next 2 hours by looking at a
formation of heavy clouds.

~~~
mh-cx
Dreaming is really an interesting case. I'm not sure if I'd agree that it's a
weak form of consciousness. It's maybe just some sort of play that the
simulator does when it's not used for the outside world. It wildly tries out
new things. If you consider that it has a whole physical and cultural universe
at its disposal, it's not really a surprise that this playing creates very
bizarre stories sometimes.

By doing so we may check for complex cross connections and gain new insights.
Actually, I vividly remember waking up from dreams and suddenly realizing
that I had a crush on a girl. I didn't really have one the day before (or was
not aware of it), but while dreaming my brain seemed to say "I've checked all
the information that we have about this specimen, and my simulations so far
look promising!"

I sometimes also dream of deceased persons. The brain maybe tries, again and
again, to make sense of this deeply unsettling experience and to somehow
integrate the unexplainable into the simulation.

~~~
lttlrck
Really interesting. Imagination/visualization would be enabled by this
simulation. For example I can visualize what is obscured by a wall.

------
maxander
This was an unexpectedly interesting talk, considering that from the framing I
expected it to consist of essentially "Penrose, everyone thinks the Gödelian
thing is stupid, please just stop." Instead we got a theory ( _very_
theoretical, but still interesting) relating consciousness to the cosmological
boundaries of the universe.

If only I understood it! Does anyone else know how to interpret the sentence
"in order for you to be unpredictable and unclonable, someone else’s ignorance
of your causal antecedents would have to extend all the way back to ignorance
about the initial state of the universe"? That and the following couple of
paragraphs seem to establish something really interesting, but I haven't been
able to follow it.

~~~
di4na
The idea seems quite simple. For you to be unable to predict what a person
will do, you must lack some information about their previous state.

Which means that even if you knew everything that happened between the
initial state of the universe and now, you still couldn't predict that
person's decision. So you would have to be ignorant of something about the
initial state of the universe.

That's because only two things can explain your inability to predict that
person: 1) the decision is completely non-causal -- which destroys a lot of
things, because a non-causal decision having an impact would break the chain
of causality; or 2) there is something in the initial state of the universe
that you can't know, so the person's acts are unpredictable but still causal.

------
Animats
I've suggested before that AI focus on animal-level AI (operating in the real
world) for a while and STFU about "consciousness". Blithering about
consciousness doesn't seem to lead anywhere.

Penrose is strange. He once sued a toilet-paper manufacturer for supposedly
copying his Penrose non-repeating tiling scheme for the embossing pattern on
their toilet paper. He lost.

~~~
Zikes
I always assumed that AI consciousness would be an emergent property, not a
feature we would (or even could) effectively develop towards.

~~~
ArkyBeagle
Chomsky shrugs that human consciousness - or at least language; good luck
separating those - is an emergent property.

So yeah.

~~~
rspeer
That seems downright _nuanced_ for Chomsky talking about language. Did he
really say this?

Chomsky considers language to be a thing that physically evolved in humans.
Are eyes an emergent property?

~~~
ArkyBeagle
"Did he really say this?" That is my interpretation of what he'd said -
there's no obvious evolutionary/natural selection ... impetus to it. Doesn't
that make it necessarily emergent?

Eyes on the front are common to nearly all predators. I think the contrast
between octopus eyes and human eyes is used frequently to deduce that eyes are
indeed emergent.

But we have nothing on the raptors.

~~~
rspeer
It's just, calling language emergent sounds close to the view that you have to
whisper furtively when you get too close to MIT: that maybe there was no
evolutionary event that created language at all, it's just a really good
meme[1] that makes use of other faculties that already existed in the human
brain.

Maybe universal grammar isn't a physical fact about the possible ways the
human brain can communicate, it's just that if we observe 6,000 variations on
an idea with the same[2] origin and try hard enough to sum them up, we'll
succeed.

[1] In the original sense.

[2] Or similar origins. Maybe language is a good enough idea to arise multiple
times and converge, much like eyes. And I'm not sure whether to count things
like Nicaraguan Sign Language as a separate origin of language.

------
mariodiana
Have some fun and google the following:

    technological analogies for the human brain in history

For the last 30 or 40 years, the brain has been "a computer." Prior to that,
it was: (insert latest and greatest technology).

I'm sure we're on to something, but I think it best we take it all with a
grain of salt.

~~~
andrewla
The difference between then and now is that we have the Church-Turing Thesis,
and the less precise Universal Church-Turing Thesis.

A violation of the UCTT would be a Big Deal.

Because of this, we now know that previous models of the human brain (as a
hydraulic system, as a mechanical system of differential gears, as a weaving
machine) are, in fact, also capable of expressing all computable functions
(and can even run Minesweeper and Conway's Game of Life).
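
That claim can be made concrete: Conway's Game of Life, for instance, is just
a small computable update rule, so any substrate that can express all
computable functions can run it. A toy sketch:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a toroidal (wrap-around) grid."""
    # Count the 8 neighbours of every cell via shifted copies of the grid.
    n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # A cell lives next step if it has 3 neighbours,
    # or if it is alive now and has 2 neighbours.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(np.uint8)

# A "blinker": three cells in a row oscillate with period 2.
g = np.zeros((5, 5), dtype=np.uint8)
g[2, 1:4] = 1
print(np.array_equal(life_step(life_step(g)), g))  # True
```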

I don't think anyone is saying that the brain is directly analogous to a
computer (in the sense that it has a CPU and volatile memory or anything) so
much as the fact that a computer, sufficiently powerful, could compute the
same things that our brain is capable of computing, because of Church-Turing.

~~~
visarga
Only one question remains: does it feel like something to be a computer? If
the computer has the ability to sense like us, to reason like us, to form
concepts, to imagine and to judge and act according to its goals, is this
enough to consider it conscious?

~~~
tbabb
Yes.

------
bitL
Could we be missing something fundamental in our understanding of the world?
Like when nobody knew about radioactivity until the 20th century, and until
then it was all green-glowing magic deep down in the mines?

Currently the paradigm of our epoch is to look at everything as a computing
problem. Before that it was hydraulics, then electricity, etc. Maybe in 100
years our successors will be dealing with a more complete picture, and their
children will laugh at school at the idea that we thought consciousness was
related to computing?

~~~
mablap
Yes, absolutely. But that's the trick, we don't know :)

------
aidenn0
> And so my friends predict that we might face choices like, do we want to ban
> or tightly control AI research, because it could lead to our sidelining or
> extermination? Ironically, a skeptical view, like Penrose’s, would suggest
> that AI research can proceed full speed ahead, because there’s not such a
> danger!

I don't think runaway general AIs must be conscious in order to be a danger,
but that might depend on specific definitions of consciousness.

------
andrewla
> Look, clearly we’re machines governed by the laws of physics. We’re
> computers made of meat, as Marvin Minsky put it.

This really seems inescapable to me. The remaining arguments all seem like
smokescreens. Being able to copy computer programs? Reproducing the running
state of even a moderately complex distributed system is effectively
impossible, so this argument bothers me not at all.

Even if there is some quantum gravitational aspect of human consciousness, the
idea that you'd be able to distinguish between that and a reasonable
computational approximation of it seems far-fetched at best.

That said, I feel that we are extremely far away from producing a computer-
based consciousness. Even human consciousness needs a couple of years of full-
speed processing to reach the point at which it could even vaguely be
considered conscious by adult standards. Most likely 5-10 years after we give
up on the notion of creating an AI we'll discover that we created one 15 years
ago but just haven't yet been able to recognize it as an emergent property of
a hugely complex system.

~~~
Animats
_" Reproducing the running state of even a moderately complex distributed
system is effectively impossible, so this argument bothers me not at all."_

That's a feature of the Xen hypervisor.[1] You can even migrate a process from
one machine to another without stopping it.

[1]
[http://wiki.prgmr.com/mediawiki/index.php/Chapter_9:_Xen_Mig...](http://wiki.prgmr.com/mediawiki/index.php/Chapter_9:_Xen_Migration)

~~~
scott_s
You linked to the migration of a single operating system kernel. That is very
different from reproducing the running state of an entire _distributed
system_. Migrating a single kernel is not easy, but it is straightforward: you
have access to all of the relevant pieces (in-memory process state, and state
on disks) in one place. With a distributed system, that is not so, since the
running state of a distributed system will include information in flight, some
of which is not virtualized.

It _is_ possible to migrate the state of a distributed system by designing a
protocol such that all of the components stop sending information around, so
that you can be sure you did not miss anything. But I think that violates the
spirit of the phrase "running state", since you essentially stopped the
system. It's also not necessarily so that post-migration, the system will act
the same.

~~~
teraflop
You might be interested in the Chandy-Lamport snapshot algorithm, which can do
what you describe without stopping the system's execution.

~~~
scott_s
I'm aware of such things, and in fact I work on a system that supports
something very similar. (Paper to be published by my colleagues in the
industry track of VLDB.) But that is a _logical_ snapshot. Through a clever
protocol, and assuming some characteristics of your transport, you can save a
state across your system that is logically consistent. That is, if you
restored to this logical state, the distributed application can resume from
that point, and maintain integrity. You may, however, lose information from
your system that was computed after you took your snapshot. The state is
logically consistent, but unlikely to be representative of any actual point in
time.

~~~
teraflop
Strictly speaking you're correct -- a distributed snapshot doesn't capture the
state of the system at a single instant. But there's a sense in which the
nondeterminism introduced by the snapshot is no worse than the nondeterminism
that is intrinsic to a distributed system.

To be more precise: any "possible outcome" of the snapshot was also possible
at some point during the original execution. And likewise, any "impossible
outcome" of the snapshot was also ruled out at some point in the original. So
if viewed from the outside, the fact that you took and restored a snapshot is
indistinguishable from the normal situation of having an asynchronous network.
It can't cause the system to exhibit any behaviors that weren't otherwise
allowable.

(Apologies if I'm just restating what you already know; other readers may not
be aware of it. This might be the same property that you're calling
"integrity".)

This is reminiscent of one of Scott Aaronson's comments from the original
article. If quantum effects are significant in the brain, then it's impossible
-- even in principle -- to "perfectly" copy it such that the original and the
copy are guaranteed to behave identically. But if the original and the copy
are _statistically_ indistinguishable, it amounts to the same thing.

------
jbob2000
One thing Julian Jaynes discusses in his book The Origin of Consciousness in
the Breakdown of the Bicameral Mind (highly recommend, but a hard read) is
that consciousness comprises a number of different qualities, one being
_excerptification_ - the ability to take a number of indiscriminate
ideas/memories/images and string together a narrative.

For example, I can take the idea of a narwhal horn and the idea of a horse and
mix them together to create a unicorn. My conscious mind took the _excerpt_ of
a narwhal horn and strung it together with the _excerpt_ of a horse body. And
I didn't just stick the horn on some random part of the horse's body, there
was a contextual understanding behind it - a narwhal horn belongs on the
forehead, the head of a horse is the thing with eyes and a nose.

Can a computer do that? No way, not presently at least. There's waaayy too
much going on in the process of excerptification for a computer to replicate
_just this one_ facet of our consciousness.

~~~
agumonkey
Computers are close to that. Backward inference over images in neural nets is
similar: they take bits of imagery and try to retrofit them in 'meaningful'
ways, blending both source and target images. Genetic programming can also
model swapping and blending traits.

~~~
jbob2000
Meh, not really, the computer doesn't _understand_ the images it's mixing,
it's just using some clever algorithms to piece together an image. It has no
awareness of the images you're giving it. I couldn't give it an image of a
narwhal and an image of a horse and expect it to give me a unicorn.

~~~
modeless
> I couldn't give it an image of a narwhal and an image of a horse and expect
> it to give me a unicorn.

You absolutely could with a generative adversarial network and the right
training dataset. They can interpolate between images in a semantic way, and
add and subtract components of the semantic representation. While it's not
narwhals and horses, here's an example of using images of men with and without
glasses along with a woman without glasses to generate a woman with glasses:
[https://raw.githubusercontent.com/Newmu/dcgan_code/master/im...](https://raw.githubusercontent.com/Newmu/dcgan_code/master/images/faces_arithmetic_collage.png)

More results here:
[https://github.com/Newmu/dcgan_code](https://github.com/Newmu/dcgan_code) And
note that this field is advancing so rapidly that these results have already
been improved on despite being only six months old.
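
The vector arithmetic behind that collage can be illustrated with a toy model
(purely hypothetical: real GAN latent spaces are only approximately linear,
and the named vectors below stand in for learned latent codes):

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Pretend "wears glasses" is a fixed direction in latent space, added to an
# identity vector. In a real GAN these would be learned latent codes.
glasses = rng.normal(size=dim)
man = rng.normal(size=dim)
woman = rng.normal(size=dim)

man_with_glasses = man + glasses

# The arithmetic from the DCGAN collage:
#   z(man with glasses) - z(man) + z(woman)  ~  z(woman with glasses)
z = man_with_glasses - man + woman
print(np.allclose(z, woman + glasses))  # True; only approximate in a real GAN
```

In the real system each z would be decoded back to an image by the generator;
the surprise of the DCGAN result is that the learned space is linear enough
for this arithmetic to survive decoding.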

~~~
argonaut
jbob2000 makes the error of being too vague (he didn't define awareness).
You're making the error of being too specific. Under no suitable definition
of awareness do GANs exhibit awareness. They are a narrow statistical tool
used to solve the specific problem they are trained for.

------
spdegabrielle
To quote @worrydream "Worrying about sentient AI as the ice caps melt is like
standing on the tracks as the train rushes in, worrying about being hit by
lightning"

~~~
donuteaters
There are plenty of people panicking about climate change... Perhaps other
people are allowed to think about other things. Or would you replace ALL of
scientific study with the single topic you consider most important?

In 20 years or so, if it turns out that all of the dire predictions were
significantly overblown, would you at least have the courtesy to blush in
embarrassment for the current alarmism?

------
Zigurd
I don't see a reason for AIs to be similar to human intellects. They are not
animals. They are not "alive" the way an animal is. They cannot die as all
animals do. They have no genes. They did not evolve. They don't have brains
that evolved to keep hairless chimps alive. They don't have a "jump away from
snakes" circuit. They won't have evolved our heuristics that make us suck at
probability and risk assessment. When people use the word "consciousness"
they usually mean the part of their brain that claims to have introspection
but in fact cannot access the subconscious agents that make decisions in the
human brain. Why should an AI have any of that?

~~~
drabiega
Well, we might find, for instance, that in order to create a general-purpose
AI which can develop intellectually quickly enough to be useful, we have to
make it sufficiently like us that it can understand and participate in our
culture.

~~~
Zigurd
It's easy to see the attraction of making an AI that is relatable to humans.
Some people will claim that if we are able to build an AI, it isn't a real
intelligence because it does not have our foibles. But I think making a
human-like AI is both difficult and dangerous. One would need to understand
what one is trying to simulate, and some aspects, like the drive to propagate
genes, could create unnecessary hazards.

------
sharemywin
The only thing I could argue is a speed component. Something would need to
work at a reasonable operating speed for humans to judge it as conscious. If
an answer took ten years to get back I doubt anyone would call it conscious.

~~~
willismichael
It's possible that some form of life exists that operates on a significantly
different time scale than we do; say it takes 10,000 years just to
communicate a greeting, or a microsecond to do the same. We likely wouldn't
recognize it as conscious (or even as life), and it possibly wouldn't
recognize us either.

~~~
visarga
If the whole ecosystem of our planet were a sentient entity, thinking slowly
over the eons, then the human species would be a thought in its head.

------
ThomPete
Computers won't become conscious any more than the person in Searle's Chinese
room understands Chinese, or the individual neuron in my brain understands
the thoughts it's passing along.

But the system the computer is a part of might create consciousness.

I think a lot of our problems with thinking about consciousness come from
thinking of it as a thing that exists in some specific place, rather than as
a component in a feedback loop.

What we fundamentally are is pattern-recognizing feedback loops on top of
pattern-recognizing feedback loops.

~~~
kirrent
You may say that, but I've literally never met anyone who thought that
consciousness in a computer might arise in any other way than how you've
described.

~~~
kowdermeister
We haven't met, but I think the same way as the guy you replied to. Being a
feedback loop is a bit of an oversimplification, but the general idea might
be an accurate description of what goes on in your brain.

------
ikeboy
>To see why, I’d like to point to one empirical thing about the brain that
currently separates it from any existing computer program. Namely, we know how
to copy a computer program. We know how to rerun it with different initial
conditions but everything else the same. We know how to transfer it from one
substrate to another. With the brain, we don’t know how to do any of those
things.

What about DRM? We can copy programs because we make them, or we know the
specifications of the original computer they were meant to run on, or we can
guess (because there aren't that many architectures, etc.).

What if a DRM scheme didn't need to run on modern architectures, and was
written in new programming languages, from machine language on up, on
entirely new paradigms? Do you think it would be easy to copy? What if it
were unethical and illegal to do anything that could stop the program or
break the hardware? Would it even be surprising that we couldn't emulate it?

Even today's DRM schemes have not all been broken, and not all programs can
be run on other substrates. The ones that have been reverse engineered were
written for architectures that are already known, with just some specifics
that needed to be figured out. (Or so is the impression I get from
reverse-engineering discussions.)

------
di4na
I find it interesting that no one answers the real question about the
so-called Singularity: does being conscious really change anything?

Couldn't we have a profoundly efficient, intelligent and organised AI
(dangerous or not; I would follow Mr. Asimov on the fact that we are going
back to the Frankenstein complex) that could do all that Singularity thingy
_without_ consciousness?

What if consciousness was just... something completely different?

------
YeGoblynQueenne
>> “give me a big enough computer and the relevant initial conditions, and
I’ll simulate the brain atom-by-atom.”

Nuh-huh. You can't keep blaming it all on the hardware. Every year there's
bigger and bigger computers and more and more people who claim they'll do
better next year, when there's bigger computers.

Well, it just ain't gonna happen. Fascinating intelligence that eats our most
powerful artificial one for breakfast is all around us, and it runs on as
little as five ganglia. Look at insects. They don't even have brains. They may
not know calculus, they may be completely incapable of even acquiring the
concept of a mathematical operation, but they are autonomous and viable in a
harsh, hostile world that the most powerful machines we've built are
completely incapable of inhabiting.

If recreating insect-level intelligence artificially were just a matter of
computing power, we'd have done it already: we've got more than enough of it.
Five ganglia? My calculator is more powerful than that! But we don't have a
single machine that can do what the lowliest fruit fly can do: survive on its
own, and thrive, in the real world [1].

So- why is human-level intelligence just a matter of more computing power, but
insect-level one is not? Is there some kind of secret ingredient? Is there
some kind of magic involved? Will some sort of -gasp- singularity occur when
the number of gates on a processor reaches a critical mass? Will the machine
suddenly acquire a S.O.U.L. [2] when we stuff enough processors in a single
casing and plug it in to the mains?

I don't believe in magic and I don't see how this whole thing is supposed to
work: "Just add more computing power". And then? What? What is the plan?

______________

[1] Actually, these things can, for limited periods, but nobody thinks they
portend the coming of strong AI:
[https://www.youtube.com/watch?v=MYGJ9jrbpvg](https://www.youtube.com/watch?v=MYGJ9jrbpvg)
\- why not?

[2] Self-Optimising Unbeatable Logic

~~~
hueving
>lowliest fruit fly can do: survive on its own, and thrive, in the real world
[1].

That's not related to intelligence. That's other components of the fruit fly
(reproductive system, etc). AI research is not really focused on creating
systems that consume resources to do nothing but reproduce.

~~~
YeGoblynQueenne
>> AI research is not really focused on creating systems that consume
resources to do nothing but reproduce.

Yeah, here's a little paper I had to peruse lately for a uni assignment (on
neural networks, of all things):

Bio-Benchmarking of Electronic Nose Sensors
[[http://journals.plos.org/plosone/article?id=10.1371/journal....](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0006406)]

It's a comparison of a Metal Oxide "electronic nose" and the olfactory
receptors of a ...fruit fly. Results: the MOx sensors are nowhere near the
fly's antennae in discriminatory power. The insect's neural system seems to be
extremely fine-tuned to discriminate between an impossibly broad range of
odourants, with overlapping signals in odorant space. The MOx sensors are
rubbish at discriminating between the different odours for exactly that reason
(that the odours blend with each other and leave you with so much noise). The
MOx receptors even get damaged if the odorant signal is too strong. The fruit
fly? It doesn't care.

Here's a bit from the paper, that's really enlightening:

 _We also note that the nature of the fly's broad and overlapping sensor
fields would seem to require sophisticated and powerful neural processing for
classification of odorant signals. There is anatomical, experimental and
computational evidence that the insect does have such a system dedicated to
olfaction [22], [23]. In contrast, it is known that the chemosensory system of
the nematode Caenorhabditis elegans, has very few interneurons and synapses
between sensory and effector units [24], [25]. Given its deficit in neural
processing power, we expect that the nematode approaches one-to-one matching
between chemoreceptors and odorants, with less reliance on combinatorial
processing than the fly. This would imply that nematode chemoreceptors have
very tight tuning curves compared with either fly ORs or MOx sensors and may
account for the remarkably large number of chemoreceptor genes identified in
the C. elegans genome [26]. Whilst replicating the nematode model in an
electronic nose may theoretically be feasible, it would present daunting
engineering challenges to develop and deploy a very large number of tightly
tuned and independent sensors._

So much for "systems that consume resources to do nothing but reproduce".

Oh yeah, AI is too interested in them, yes indeedy-oh. If we could make _one_
measly computer that could find its way around the real world with the
purposefulness and accuracy of a mere nematode worm, we would be happy. No,
scratch the "we". _I_ would be happy. The mother of AI they'd call me, you
know? It'd be Nobel this and Turing award that and oh, here's an honorary
readership to add to your collection, ma'am.

I mean you have to be joking, right? We're not interested in reproducing the
cognitive abilities of biological systems? That's like, the definition of AI.

------
marlag
If we managed to model exactly how a rat thinks, behaves and we create a
little furry rat robot that runs around being all rat-like, doing all things
real rats do to the degree that the robot is accepted by the local rat
community, is that not a system that would be considered equal in complexity
to a rat?

Do we have enough information to create a rat? I think so. Is that an A.I.? To
be rat-like you have to be able to adapt to new environments, learn new
things.

Could we also model a human baby in the same way? Ok, that's perhaps a bit too
hard. What if we try to create a model of a baby with Down syndrome? It would
still be considered intelligent, right? Is it an A.I.?

Why does a system need consciousness in order to be able to dominate its
environment? To me, clonable robotic rats would be as much of an inconvenience
as real rats are.

------
sickbeard
My definition of conscious is being aware of what you are doing and
influencing it.

Can a computer running a file server realize it is only storing pirated movies
and stop it without being explicitly programmed to do so? A conscious computer
could.

~~~
visarga
> Can a computer running a file server realize it is only storing pirated
> movies and stop it without being explicitly programmed to do so? A
> conscious computer could.

Could a person from 1975 tell that your hard drive is full of pirated content?
He probably recognizes no title from your library, so he could not assume one
way or the other. It's as if specific "programming" were missing from him;
either that, or people from 1975 don't pass your consciousness test.

~~~
lgas
As a data point, I'm from 1975 and I can tell if your hard drive is full of
pirated software (but I don't care).

------
ArkyBeagle
I was so looking forward to "The Emperor's New Mind", and all it did was
skyhook quantum effects into being "the soul".

Even if it's true how could you tell?

$DEITY bless Roger Penrose nonetheless.

------
epx
How useful would a conscious computer be? It would have to be protected by
some rights (at the very least, equivalent to animal rights); it would cease
to be a machine. The narrow-task AI efforts make more sense from an economic
point of view, and I think they actually have more disruptive consequences
(massive job extinction etc.).

That said, I believe that consciousness is "supernatural": something like
another state of matter, or a quantum phenomenon, that cannot be manipulated
like an Excel spreadsheet.

~~~
Raphmedia
> equivalent to animal rights

Look at chickens. Piled on one another, slaughtered at will. We would simply
stack conscious computers on one another and then delete them when they are
not needed.

------
rspeer
When Marvin Minsky described one perspective on consciousness as "hey, we
don't understand consciousness, and we don't understand gravity, maybe
consciousness is made of gravity", I thought he was just mocking the ever-
popular idea that "maybe consciousness is made of quantum mechanics".

I didn't know that there really was a prominent thinker who postulated gravity
( _and_ quantum mechanics) as the place where subjective consciousness is
hiding.

~~~
DonaldFisk
I don't think anyone's seriously put forward the idea that consciousness and
quantum mechanics are related because they're both mysterious, so that is a
straw man. In any case, we do understand gravity and quantum mechanics pretty
well, at least compared to consciousness.

The reason some people think QM and consciousness are related is that some
interpretations of quantum mechanics require a conscious observer to collapse
the wave function and make the final measurement (e.g. by taking a reading
from a measuring apparatus, or by listening to someone else, who they can't be
sure is conscious, telling them the measurement over lunch). I'm not familiar
with Penrose's argument re gravity, but so far general relativity has resisted
attempts to quantize it, so maybe he's saying that GR and QM must be
incorporated into a more general theory yet to be worked out, like electricity
and magnetism are incorporated into Maxwell's theory of electromagnetism.

~~~
rspeer
It's a straw man, sure. It's an exaggeration about people being willingly
baffled by physics.

May I ask, where did you get the part about "conscious observer"? Yes, there's
a thing in QM about observation collapsing the wave function, and there's
disagreement on what that actually means. But I can't imagine that an
interpretation promoted by actual physicists would use a squishy term like
"conscious" in their hard science.

~~~
Strilanc
'Consciousness-causes-collapse' is also known as the 'Von Neumann-Wigner
interpretation' [1]. Yes, THAT Von Neumann.

It's got no explanatory power over other interpretations, obviously. But
actual physicists demonstrably used squishy terms like "conscious" when
talking about their hard science. Embarrassing, but true.

1:
[https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Wigner_int...](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Wigner_interpretation)

~~~
DonaldFisk
And indeed THAT Wigner.

At some point a measurement is made, and the only way you can be sure of the
result is by consciously experiencing it. And until you ask your colleague,
her quantum state is entangled with whatever is being measured, and with the
measuring apparatus.

I don't see why it's embarrassing for physicists and other scientists to use
"squishy" terms about things they know (or even suspect) are there but can't
explain. Magnetism was just as squishy a term until it was understood. The
only difference is that consciousness seems to be exclusively a first-person
experience, but that arguably makes it even more real. (You might be a brain
in a vat, in which case everything you sense would be bogus, but your
experience of it would still be real.)

------
kazinator
The Many Worlds hypothesis provides a partial answer to some issues of
consciousness, via a kind of solipsism: "I am the only mind which exists: _in
my fold_ of the Many Worlds complex". This addresses the problem of why
everyone's consciousness is pinned to oneself: what makes your consciousness
confined to yourself and inaccessible to me, and vice versa. Basically,
there is no hidden context variable for that; in fact, the universe in which
you're conscious is devoid of my consciousness. (Plain solipsism consists of
_denying_ that there is any other consciousness anywhere, which is very
different, and has entirely different ethical/moral implications.) The
hypothesis is essentially that the subjective phenomenon of consciousness is
something of which a single instance arises in any universe. That's okay; we
have multiple parallel universes, and therefore multiple instances of
consciousness. A variation on the Anthropic Principle then explains why "you
are you and not someone else". (If you were someone else, you might still be
asking that question---as that someone else.)

------
pessimizer
Consciousness is a word searching for a meaning. It's telling that this entire
thread is people explaining their own personal definition of this reification
that we use to separate ourselves from the animals.

Consciousness is the state of being awake, hence able to react normally to
sensory stimulation. Computers are nothing but consciousness, unless they're
freeing memory.

~~~
yarou
> Consciousness is the state of being awake, hence able to react normally to
> sensory stimulation.

And how would you go about verifying that? I know that I'm conscious, but you
could be a p-zombie. That means if I poke you with a toothpick, you will
scream in agony (responding to stimulus), but that doesn't give me any
information about whether or not you are conscious.

~~~
meric
We're all p-zombies. When we get poked with toothpicks we will scream in
agony, the thought of being poked with toothpicks will appear internally, and
we will in turn react to that thought, the reaction based on our upbringing.
We have our outer senses, sight, touch, and then our inner senses, that can
sense the 'thoughts' that were triggered by outer sense stimulation. The
thought is just another form of sensory stimulation, but it is internally
generated, indirectly based on an external source. Our reaction to those
thoughts depends on how we've seen other people react to similar situations
that have generated those thoughts in us.

~~~
yarou
I'm not that comfortable with the hand wavy physicalist view that
consciousness arises from the physical processes of the brain.

If that were the case, why can't I simply design an artificial brain, a cyber
brain, and upload myself in 1000 bodies?

What really spoke to me in Aaronson's entire lecture is the whole quantum no-
cloning restriction. Bell states can't help you either, because you can't
violate causality.

Suppose for a second that you could violate the laws of physics in U', a
universe where anything goes. You can even have free energy too, i.e. no
increasing entropy. I would still argue that creating uncountably infinite
copies of yourself would still yield only the original consciousness, while
the rest are merely simulations.

FWIW, I don't think "microtubules" is sufficient either, because that's
equally hand-wavy and a restatement of ghost in the machine. At least that's
how I parse it.

~~~
naasking
> If that were the case, why can't I simply design an artificial brain, a
> cyber brain, and upload myself in 1000 bodies?

What makes you think you can't, at least in principle?

~~~
yarou
Because it necessarily violates quantum no-cloning. I think the state of any
human at time t is a quantum state, more specifically a linear superposition
of n distinct states with probability amplitudes.
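
For reference, the no-cloning theorem invoked here has a short standard proof.
This is just the textbook sketch (nothing specific to consciousness or human
states is assumed):

```latex
% Suppose a single unitary U could copy an arbitrary unknown state
% onto a blank register |e>:
\[
  U\bigl(\lvert\psi\rangle\lvert e\rangle\bigr)
    = \lvert\psi\rangle\lvert\psi\rangle,
  \qquad
  U\bigl(\lvert\phi\rangle\lvert e\rangle\bigr)
    = \lvert\phi\rangle\lvert\phi\rangle.
\]
% Unitarity preserves inner products, so taking the inner product of
% the left-hand sides and of the right-hand sides gives
\[
  \langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2},
\]
% which forces the overlap to be 0 or 1: U can clone only states that
% are identical or orthogonal, never an arbitrary superposition.
```

So a perfect copier of unknown quantum states is ruled out in principle,
which is the restriction the comment is leaning on.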

I don't necessarily agree with the lookup table analogy - sure, consciousness
is finite, but it is also a chaotic system. Else, it would be extremely easy
to predict every possible set of actions I will make in my entire life. As
Aaronson mentions, I don't think you could do that without killing me using
something like Laplace's demon.

It's relatively easy to create a simulation of me - that's merely a problem of
data. A simulation wouldn't be conscious though, it would be _like_ me, but
not _me_ proper.

I think there is some intrinsic property that gives rise to consciousness, as
of yet undiscovered. Hard to say though, after all, are we all not
information?

~~~
naasking
It's strange that you find the physicalist account "hand-wavy", but then use a
physicalist model to dispute mind uploads. So do you accept physicalism or
don't you?

------
Lapsa
Reminds me of McKenna's Timewave Zero.

------
ursulets23
I think it all comes down to what you think consciousness is... it all
surrounds that...

------
ZanyProgrammer
TIL: Scott Aaronson really has a bee in his bonnet about Penrose. Well, not
really "just learned"; if you've read his books/writings you'd know that. But
he really seems to feel the need to go after him.

------
hackney
God that was long. Whew. I think it all boils down to putting too much
attention into physical objects. Can it be done? Of course; there are actually
many examples, and none of the resulting objects were healthy to be around,
for obvious reasons. Consciousness is meant to explore the universe, and the
"idea" that suddenly we should step backwards into a robot (my office), after
a million years of evolution, is absolutely and perfectly ludicrous to even
suggest. You want your self-driving cars and self-cleaning kitchens. A machine
can never replace a partner or ally, or a child for that matter. Ultimately,
if you want to create a conscious being, it would have to be on the scale of a
planet in order to achieve what we are talking about.

------
crag
The question isn't "can" but "should". Should we, assuming we could, create
conscious (conscious meaning self-aware and able to learn) machines? There
have been hundreds of books, movies, and games about this issue, and all come
to the same conclusion... NO.

~~~
tim333
I'm not sure they've concluded it's a bad idea, more that the machines and us
living happily ever after makes for a dull movie.

~~~
pasquinelli
right. those books and movies and whatnot express human anxiety about
artificial consciousness. whether that anxiety is well-founded is really
beside the point.

