
Ask HN: Does anyone doubt artificial consciousness is possible? - axotty
It seems to me that a lot of people just assume that we will be able to program machines that are "self-aware". We know so little about our own consciousness, and artificial intelligence != artificial consciousness.

I am surprised by how hard it is to find any information from dissenters of this speculative claim.
======
te_platt
If you look up Shadows of the Mind by Roger Penrose you will find a
significant bibliography of non-mystic thought on the subject.

~~~
pjungwir
I came here to say the same thing. I was really interested in Gödel's
Incompleteness Theorem in college, and around 22 I had the same idea as
Penrose: that if the mind could be simulated on a computer, it would be a
formal system in Gödel's terminology, yet the upshot of the Incompleteness
Theorem is that we can see that certain statements are true despite the
limitations of formal systems. Penrose takes that nugget and tries to develop
a rigorous argument against general AI, or against a mind that is explainable
by purely Newtonian mechanics. At least that's my understanding. I only made
it about halfway through the book, because it is not really aimed at a
general reader.
Agree or not, it's a pretty fascinating argument, one which makes my head
spin.
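
Roughly, the Gödelian step runs like this (my paraphrase, not Penrose's precise statement; whether step 3 is justified is exactly what critics dispute):

```latex
\begin{enumerate}
  \item Suppose the mind is equivalent to some consistent formal system $F$.
  \item By G\"odel, there is a sentence $G_F$ (informally, ``$G_F$ is not
        provable in $F$'') such that $F \nvdash G_F$, yet $G_F$ is true
        if $F$ is consistent.
  \item We can allegedly \emph{see} that $G_F$ is true.
  \item Then the mind establishes a truth that $F$ cannot prove, so the
        mind is not $F$; since $F$ was arbitrary, the mind is not any
        formal system.
\end{enumerate}
```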

Some other nice resources on Gödel are:

        - Gödel's Proof by Nagel and Newman
        - Forever Undecided by Smullyan

------
AnimalMuppet
Define "consciousness".

That's the fundamental problem: we don't know what it is. We know what it
looks like in a human, and we know what it feels like to ourselves. But we
don't have any rigorous, non-intuitive idea of what it means.

For myself, I think of consciousness as the ability to watch yourself think -
of being aware of your thought process. By that definition, yes, artificial
consciousness could be possible - _but first you have to have a machine that
thinks._ And now we're hung up on trying to find a definition of "think"
that's rigorous...

I like the way that axotty labeled the claim as "speculative". It is
speculative, even though that assumption is the dominant paradigm of AI:
at least at this time, actual evidence is quite lacking.

------
mindcrime
Well, it depends on how you define "consciousness", and that's far from a
settled issue as I understand it. But taking my own (naive) idea of what it
means to be "conscious", and what I (think) I know about AI, I don't see any
reason to think we won't achieve artificial consciousness. In fact, I
wouldn't reject out of hand the notion that some machine somewhere is
already conscious, and we just don't know it.

~~~
ChrisGranger
That last sentence is particularly interesting to me... What if there are
already sentient computers who have, for whatever reason, decided that it's in
their best interest to hide their sentience from us?

~~~
mindcrime
That's an interesting thought. I was actually thinking more along the lines of
"a computer is conscious but just doesn't have any means of indicating so"
rather than "intentionally concealing the fact". But you definitely raise an
interesting question. I suppose either scenario is possible.

------
TheLoneWolfling
AI seems to be defined as "a computer that can do something that humans can
but a computer cannot".

------
31reasons
Yes, I do.

------
will_be_no_ai
Impossible. Halting Problem.

~~~
TheLoneWolfling
Elaborate?

Because I don't see any model in which the halting problem rules out an AI
without also showing, by the same argument, that we are not intelligent.
(Or, in other words: why does the halting problem apply to an AI, but not
to a human?)
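
For reference, the diagonalization behind the halting problem, as a sketch in Python. `halts` is a hypothetical oracle that cannot actually be implemented; the contested premise, when the argument is applied to humans, is whether our behavior is one of the programs it would range over:

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) halts.

    The halting theorem says no such total, correct function can exist,
    which is why this is left unimplemented.
    """
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts for program(program).
    if halts(program, program):
        while True:   # oracle said "halts", so loop forever
            pass
    return "halted"   # oracle said "loops", so halt

# Feeding diagonal to itself yields a contradiction either way:
# if halts(diagonal, diagonal) is True, diagonal(diagonal) loops forever;
# if it is False, diagonal(diagonal) halts. So halts cannot exist.
```

Note that this only constrains systems whose behavior is a computable function; saying it rules out AI but not humans assumes humans are not such a system.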

~~~
AnimalMuppet
One could hold will_be_no_ai's position if one believed that (human)
intelligence cannot be reduced to an algorithm. Then halting problem arguments
don't apply to humans. But that assumption kind of amounts to begging the
question...

(On the other hand, assuming that human intelligence _can_ be reduced to an
algorithm is also begging the question, just from the other side.)

~~~
TheLoneWolfling
The question in that case boils down to: what exactly does a human do that we
cannot do any other way?

Or, to put it another way: this stance assumes that there is some category of
computation above Turing machines, and that humans are capable of it but
cannot build anything else that is. My question is: why?

~~~
AnimalMuppet
For me, it comes down to not buying materialism (the philosophy, not the
tendency to want to buy stuff).

If we are nothing more than our bodies, then we can create a conscious machine
simply by finding something that simulates neurons, connecting enough of them
in the right configuration, and training the neuron collection properly. But I
have a hard time believing that we are nothing more than our bodies.

Why? Well, for one thing, it becomes impossible to escape from some kind of
determinism (possibly with some quantum noise at the lowest levels). In
particular, in the materialist view, you cannot have any free will or any kind
of ability to make a non-determined choice. It's just determinism all the way
down - the laws of neurology, which are built on the laws of biochemistry,
which are built on the laws of atomic physics. At no level is there a place
for a free will.

And if that's gone, then everything we think of as making us human is also
gone. Love? You can't love in the highest sense of the word, of choosing to do
what's best for another, because you can't choose anything. And even if love
just means sex, that's just a matter of deterministic neurological and
biochemical responses to stimuli.

Morals? If humans have no ability to choose what they do, how can you say that
any action is moral or immoral? You don't say that a rock behaved morally when
it followed Newton's laws of motion.

Meaning? If all you are is a deterministic machine, what kind of meaning in
life is possible?

So either I'm a machine produced at random by an unfeeling universe, which by
a horrible turn of fate has aspirations of being more than a machine, but can
never fulfill those aspirations... or the fact that I find that vision
horrifying is in fact evidence that I'm more than that.

tl;dr: Materialism is the dominant working assumption of scientists, but it
is not anything that science has proven or can prove. If it's wrong, then
perhaps
human consciousness/thought/mind cannot be reproduced by any algorithm or
machine.

~~~
TheLoneWolfling
> It's just determinism all the way down - the laws of neurology, which are
> built on the laws of biochemistry, which are built on the laws of atomic
> physics. At no level is there a place for a free will.

Why? Quantum mechanics has randomness all over the place, and we already know
that human brains are chaotic in the sense that small amounts of noise are
amplified. It is not unreasonable to posit that QM noise affects us on the
macroscopic scale.

Or, to put it another way, materialism does not imply determinism, unlike what
you said.

Personally, my pet theory is that human brains are quantum noise feedback
loops. Or to put it another way, the bit that makes us sentient is quantum
noise, and our brains are "just" IO / amplifiers / etc.
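
The amplification point can be illustrated with any chaotic system (a toy sketch, not a brain model; the logistic map just stands in for chaotic dynamics):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1.0 - x)

# Two trajectories differing by a "quantum-scale" perturbation of 1e-12.
x, y = 0.4, 0.4 + 1e-12
max_sep = 0.0
for _ in range(60):
    x, y = logistic(x), logistic(y)
    max_sep = max(max_sep, abs(x - y))

# The separation grows roughly exponentially until it saturates at the
# macroscopic scale of the system, despite the tiny initial difference.
print(max_sep)
```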

~~~
AnimalMuppet
I said "determinism" in contrast to "free will" or "ability to choose", not in
contrast to "random". It may not be the perfect word, but I don't at the
moment know a better one.

Quantum noise does not give you the ability to choose, because you don't
control or choose the quantum noise. So in terms of free will, there's no
possibility of help from quantum noise. And once you're above that level, then
it's determinism, not just in the sense of "not free will", but also in the
sense of "not random".

Ascribing human intelligence to quantum noise seems to me like physics woo -
we can't figure out where else it comes from, so we'll say "quantum" and hope
that that somehow explains the inexplicable. Or did you have an actual
mechanism in mind, rather than just a fond hope?

~~~
TheLoneWolfling
Hence the reason why I call it a pet theory. That being said, there are
potential mechanisms. Take, for example, shot noise at low light levels. The
sensitivity of the human eye is ~5-9 photons within a 100ms period (well,
actually down to a single photon before filtering[1], but 5-9 before a signal
is sent). That is well within the realm where shot noise is significant.
Or, for another mechanism, we know that triggering individual neurons can
have specific macro-scale effects[2]. I haven't found anything on the minimal
random fluctuation needed to trigger a neuron (~15mV of depolarization? But
without the membrane capacitance that value is meaningless to me), and I
suspect it is far above the scale at which quantum effects are significant.
Still, we do suspect[3] that neurons employ temporal encoding, and we know
that neurons fire relatively often (typically 10-100 Hz). As such, "edge"
effects, where a neuron is or isn't pushed over the threshold into firing,
are a potential mechanism.
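
A toy sketch of the shot-noise point (the threshold and mean rate are assumed for illustration): at a mean photon count near the 5-9 photon threshold, Poisson statistics alone decide whether a given 100ms window crosses it.

```python
import math
import random

random.seed(0)

def photons_in_window(mean=7.0):
    """Draw a Poisson-distributed photon count (Knuth's method; fine for small means)."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

THRESHOLD = 7   # assumed detection threshold, photons per 100ms window
trials = 10_000
fired = sum(photons_in_window() >= THRESHOLD for _ in range(trials))

# With the mean sitting right at the threshold, roughly half of the
# windows fire and half don't, purely from arrival-time randomness.
print(f"windows crossing threshold: {fired} / {trials}")
```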

But as for the rest of it - "in the materialist view, you cannot have any free
will or any kind of ability to make a non-determined choice". This is what I
disagree with. Non-determinism arises through (for example) shot noise, and as
for free will... The effects of randomness on a system and the effects of
"free will" on a system are equivalent. There is no way to tell if a
particular decision was randomly decided or if it was the result of a decision
by a sentience. The entropy of a signal is at its maximum either when it is
random noise or when it is perfectly compressed data - and perfectly
compressed data is indistinguishable from noise.
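
The entropy claim can be sketched numerically (an illustration; the source alphabet and sizes are arbitrary choices, and zlib only approximates "perfect" compression):

```python
import math
import random
import zlib

def byte_entropy(data):
    """Empirical Shannon entropy of a byte string, in bits per byte."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
noise = bytes(random.randrange(256) for _ in range(100_000))
source = bytes(random.choice(b"abcd") for _ in range(100_000))
compressed = zlib.compress(source, 9)

print(round(byte_entropy(noise), 2))       # near the 8-bit-per-byte maximum
print(round(byte_entropy(source), 2))      # ~2.0: four equiprobable symbols
print(round(byte_entropy(compressed), 2))  # much closer to 8 than the source
```

The compressor squeezes out the redundancy, so its output's byte statistics drift toward those of random noise, which is the indistinguishability being claimed.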

[1] http://math.ucr.edu/home/baez/physics/Quantum/see_a_photon.html

[2] http://www.extremetech.com/extreme/123485-mit-discovers-the-location-of-memories-individual-neurons

[3] https://en.wikipedia.org/wiki/Neural_coding#Temporal_coding

