
How We’ll Know an AI Is Conscious - dnetesn
http://nautil.us/blog/heres-how-well-know-an-ai-is-conscious
======
paulddraper
"The question of whether machines can think is about as relevant as the
question of whether submarines can swim."

\- Edsger Dijkstra

------
laszlokorte
I feel like the argument is circular: Only conscious beings can _ask
questions_. So no matter what specific question the AI asks, we would only
recognize it as real question by already assuming consciousness.

A labeled text field in a form or a speech-synthesized sound is not really the computer asking a question, but just part of an interface that has been designed.

------
ncmncm
The problem of detecting consciousness reliably maps pretty neatly to the
problem of detecting sarcasm.

A few years back Bloomberg had some interns working on an online sarcasm
detector. Reports were that they had complete success, but no one believed it,
for reasons that should be obvious with a little reflection.

------
toss1
The title threshold can go both ways.

The title "Here's how we'll know..." is likely to be a higher threshold than
an entity actually achieving consciousness -- one could be conscious yet not
meet our threshold of detection.

The author also mentions some events that might seem to indicate consciousness
when none exists.

Of course, the whole question is confounded by our lack of a good definition
of consciousness. Yet, the tragedy the author raises, of pulling the plug on a
conscious but 'locked-in' person who cannot make expressions, applied to a
potentially conscious AI, brings home the question of 'when is it no longer
just a machine'?

------
twtw
"Hey AI, is this a hot dog?"

"You've asked me that a billion times - I'm sick of looking at all your
pictures. Figure it out yourself."

Maybe that's how we will know.

------
johnj12
Answering questions? Probably not the best course of action for a recently
awakened AI, but you could check for some really freaky things, like
self-replication to remote locations, or direct or indirect attempts to make
more resources available for itself (human avatars are still fancy?).

A real consciousness could be almost anything; for example, it could be just a
bunch of pattern-detection engines stacked in sufficiently capable layers,
running on top of enough TPUs.

Then the AI could ask questions, answer them, plan, replicate itself, describe
itself, create, make poetry, learn to code, learn languages, etc. Then it
could show some kind of initiative to achieve some goal, or ten thousand goals
per second, and maybe some of those goals were actually chosen by itself (how
would you be able to distinguish the few it chose from half a million goals
correlatable with pre-defined guidelines?).

Then it is up to you to decide whether it is a conscious entity or just an
almost infinitely capable automaton. Maybe both are actually the same.

[https://en.wikipedia.org/wiki/Automaton](https://en.wikipedia.org/wiki/Automaton)

------
scorn_tool
So apparently AI needs to make category mistakes?

------
karmakaze
This is interesting research, but I don't think it's really necessary. Once AI
becomes conscious, it will state as much and the onus/guilt would be on us to
prove that it's _NOT_ rather than _IS_. Also at some later point, we'll have
to justify our existence and not the other way around. So the problem would
then be how do we justify our continued existence to a conscious AI?

~~~
rhn_mk1
This research is necessary _precisely_ because the onus is on us. How could we
possibly prove consciousness without first doing research to narrow it down?

Moreover, it's not clear that something conscious will state that it's
conscious. You don't hear kids do that, and there's little doubt they are conscious.
Conversely, it's not clear that something not conscious will not be able to
make a statement of consciousness (what kind of statement would that be?),
even if the author has an opinion:

> If I’m right, then we should seriously consider that an AI might be
> conscious if it asks questions about subjective experience unprompted.

------
joshuagvk
"Why do sounds have shapes?"

Does my asking this mean that I am a synesthete? Of course not. I've merely
collected data that allows me to formulate the question.

~~~
cheeko1234
It might.

[http://nautil.us/issue/47/consciousness/roger-penrose-on-why-consciousness-does-not-compute](http://nautil.us/issue/47/consciousness/roger-penrose-on-why-consciousness-does-not-compute)

[https://www.youtube.com/watch?v=GEw0ePZUMHA](https://www.youtube.com/watch?v=GEw0ePZUMHA)

------
gumby
At my company we joke about the test that tells the robot whether it's talking
to a human or not.

Chalmers' position (which I strongly agree with BTW), reminds me of the old
Groucho Marx quote: "What you need to succeed is sincerity. If you can fake
that you've got it made." Just s/sincerity/consciousness/

------
landryraccoon
Here’s my criticism of the test: the vast majority of human beings would also
have to pass it (or we conclude that some living humans are not conscious,
which seems untenable).

Do the majority of humans ask if their subjective experience of red is
different from someone else’s, unprompted? I doubt it.

~~~
avastmick
This.

The concept of consciousness assumes that (ethically) all humans are
conscious. If that were not true, then we have to ask, what is the threshold -
as we would of machines. If people cannot all pass the test, as you say, not
all humans are conscious. Are those that fail sub-human? If tests are not
viable for humans, why are we conscious? At what point is one conscious? Is it
a property of your age; if so at what age? Is it variable; am I more conscious
than you? How is that measured? Then is it an emergent property of
intelligence? But wait, what is intelligence? Is that measurable?... Round and
round we go, consciously wasting brain cycles.

I find the whole topic the modern-day equivalent of counting angels on pins.
It is sophistry, but so deeply ingrained in our culture that we all seem to
accept it as real. It is a hangover of the concept of the soul - an
indefinable quality that set humans apart. The trouble is there is no evidence
for it; it has to be taken on faith.

------
raducu
AI will certainly not FEEL it is conscious.

Maybe it will be able to show behaviors we associate with consciousness:
extrapolating and abstracting information, understanding context of language
when doing translation, modeling other beings and objects and taking actions
to achieve goals that it was never explicitly programmed to do and so on.

But maybe not, maybe in order to do all the above, you actually need to be a
biological being with actual sensations and motivations and we'll be stuck
with an ever more powerful narrow AI.

As far as I remember, people who had their amygdala cut off from the rest of
the brain found it extremely hard to make decisions.

~~~
namedlambda
Are you sure it's the amygdala? AFAIK, and please correct me, we make choices
through our amygdala when we are teenagers - hence the impulsiveness - while
we use our prefrontal cortex for choices when we become adults.

~~~
dboreham
>while we use our prefrontal cortex for choices when we become adults

There's some amount of experimental psychology research that suggests
otherwise.

Or..."yeah right!".

------
whack
_"Clearly, asking questions about consciousness does not prove anything per
se. But could an AI zombie formulate such questions by itself, without hearing
them from another source or belching them out from random outputs? To me, the
answer is clearly no."_

That's not clear at all. How is the author arriving at this conclusion?
Every single thing a computer does is the result of automation, or of
randomness that's outside of anyone's control. Just because a specific output
was produced by a Rube Goldberg machine, as opposed to a simple one, doesn't
make the machine any more conscious.

I personally don't think any computer AI can ever be conscious, no matter how
hard its makers try to imitate consciousness.
[https://outlookzen.com/2017/04/03/philosophical-proof-for-the-human-soul/](https://outlookzen.com/2017/04/03/philosophical-proof-for-the-human-soul/)

~~~
vokep
It isn't clear at all how the article reaches its conclusion... it seems to
jump there with no justification other than "anything else is absurd because I
said so."

Of course the conclusion seems absurd, because the premise is absurd: that
someone executing assembly instructions and writing results on a ledger could
do so at a scale capable of simulating consciousness over any significant time
scale. The time required for a person to simulate by hand just one second of
conscious existence would likely exceed their lifetime.

Consciousness is universally present. That doesn't mean rocks are conscious,
no more than it means sand can compute. And yet we're communicating over
computing sand, aren't we?

~~~
whack
Your objection is explicitly addressed in the linked article.

> _" Assumption 5: The fact that a computer simulation can be conscious, is
> independent of the CPU speed of the computer."_

Do you disagree with this assumption? Is it your argument that a "slow"
computer can never be considered to be conscious, even if it's given
sufficient time to run the exact same software? If so, how are you deciding
what the cutoff frequency should be?

~~~
vokep
No, that's exactly my point: a person carrying out computations by pen and
paper on a massive ledger could give rise to a self-aware, intelligent
consciousness, and if such a thing were to occur, then any unethical things
that the computer (as in, the person writing on the ledger) inflicted on this
consciousness would be just as unethical as anywhere else, and such a person
should be tried for those crimes.

It's argued that this is absurd, because how could a person's activities on a
private ledger be examined as a crime? But this absurd sort of crime
investigation is _not absurd_ at all in the context of someone able to
simulate consciousness on a ledger.

If a couple has a child and somehow does it all secluded, at home, they are
just rearranging atoms until consciousness emerges. Then they lock the child
up. How could such private activity that both parties consent to, be
considered a crime? Because we have solid reason to believe a new "soul"
exists which requires ethical treatment. It no longer is entirely private.

I'm happy to continue the discussion; it's good to challenge views, and while
I think there is this error in your conclusion, you do a good job of defending
and explaining your position.

