
No Ghost in the Machine - ordiblah
https://theamericanscholar.org/no-ghost-in-the-machine/
======
codeulike
People reading this on Hacker News will of course agree with his main point -
our computers can't 'think' as such, and when we talk about them as if they
can, we're setting ourselves up for misunderstandings. But he also seems to be
saying that we'll never get there because computers just run algorithms.
Arguments like that always remind me of this bit from Iain Banks's 'A Few
Notes On The Culture':

 _Certainly there are arguments against the possibility of [strong] Artificial
Intelligence, but they tend to boil down to one of three assertions: one, that
there is some vital field or other presently intangible influence exclusive to
biological life - perhaps even carbon-based biological life - which may
eventually fall within the remit of scientific understanding but which cannot
be emulated in any other form (all of which is neither impossible nor likely);
two, that self-awareness resides in a supernatural soul - presumably linked to
a broad-based occult system involving gods or a god, reincarnation or whatever
- and which one assumes can never be understood scientifically (equally
improbable, though I do write as an atheist); and, three, that matter cannot
become self-aware (or more precisely that it cannot support any informational
formulation which might be said to be self-aware or taken together with its
material substrate exhibit the signs of self-awareness). ...I leave all the
more than nominally self-aware readers to spot the logical problem with that
argument._

~~~
xamuel
People in this debate miss a subtle issue, and it causes no end of confusion.
The subtle issue is this. There is a difference between (1) accidentally
building a computer which happens to think, and (2) intentionally building a
computer which thinks, and knowing that it thinks.

Obviously (1) is possible, because for any particular configuration of atoms,
it is theoretically possible for someone to assemble atoms together in that
configuration. I could randomly put atoms together and, by sheer luck,
assemble a living, breathing Abraham Lincoln. But since it happened by random
chance, I wouldn't _know for sure_ that this was really a living, breathing
Abraham Lincoln. I might observe it for a while and notice that, golly, it sure
does seem to be a living, breathing Abraham Lincoln. But I could never rule
out the possibility that it only looks like that initially and that it'll
eventually collapse or fall into some infinite loop or something.

Intentionally building a machine and KNOWING that it has such-and-such
properties of a thinking machine is much more interesting. Such KNOWING is,
itself, an act of thinking, and so there arises an interplay between the
thinking of the created machine, and the thinking of the creator. Then you can
start asking questions about whether the created machine, being a thinker,
could also itself intentionally create another thinking machine, and so on.

If you're tired of the endless non-productive debate about whether thinking
machines are possible, and you'd like to go to the next level, and take steps
toward actually quantifying things, I'd invite you to read some papers I've
written on the subject. To lessen the learning curve, here are some very
approachable slides, and if you like those slides, you can click through to
one of the papers.
[https://semitrivial.github.io/MeasuringIntelligence2019.pdf](https://semitrivial.github.io/MeasuringIntelligence2019.pdf)

~~~
jbattle
Isn't this the same as the "problem of other minds"? How do I know that YOU,
Mr. xamuel, are actually sentient?!

If we get to the point where a machine demonstrates "sufficiently" complex
behavior, society will quickly accept them as sentient. This is something we
are primed to do. Hell, people ascribe all sorts of intentionality to their
dog.

After a few years, society at large will accept these machines as sentient,
and it'll be the same philosophers arguing about "but are they REALLY
sentient" that have been arguing the problem of other minds since ... Gorgias
in 4th century BC?

You raise a really interesting point / distinction. I'm just arguing that
society will race past this question without hesitation.

~~~
xamuel
Bonus points for mentioning Gorgias :)

Re: wouldn't society race past this question without hesitation [if someone
got lucky and produced what appears to be strong AI by just randomly throwing
atoms together]?

Yes, and some people would argue that is precisely what has in fact happened,
the random atom-throwing-together process being called "evolution".
And as you point out, philosophers would continue to debate about "is it
REALLY strong AI?", just as nowadays philosophers debate about whether humans
are machines or not.

These hypothetical debates will always exist, nothing will ever change that,
not even if someone randomly creates a true strong AI by sheer luck.

My point is that there's a whole other half of the field, which ISN'T doomed
to perpetual fruitless philosophical debating. The question of whether
intelligent machines can exist is one that is doomed to fruitless debates
forever. But the question of whether we can build specific machines and know
with mathematical certainty how quantitatively intelligent they are--- _that_
is a much more fruitful area where progress can be made, theorems can be
proved, mathematics can be applied, etc.

~~~
TheOtherHobbes
Unfortunately there's no agreement that we can actually quantify intelligence
in a useful way, and the belief that "ability to reason" is synonymous with
intelligence - never mind that it is evidence of sentience - is clearly
incorrect.

Whatever human intelligence is, it has multiple dimensions, and being "able to
reason" - whatever you assume that means - is a tiny element of a bigger
picture.

It's practically a universal law of AI that proponents of AI inevitably
understand AI in terms of their own niche interests - whether those are chess,
go, e-sports, music, or driving. Only a field like CS could conceive of a test
where the ability to _emulate a conversation by typing lines of text on a
terminal_ might be considered an indicator of operational sentience.

In reality all but the absolutely dumbest humans can operate a body fairly
effortlessly, feed themselves and keep themselves clean, express
characteristic preferences and act on them with agency, read and respond to
human expressions and emotional cues, communicate internal emotional and
psychological states verbally and non-verbally, improvise solutions to simple
problems, maintain a fairly consistent emotional and psychological outlook
that nonetheless develops over time, and make occasional surprising and
unexpected - but not inhumanly weird - statements, observations, and actions.

How much of that is considered "being able to reason"?

If there's a way to quantify ability in all of the above, I'm not aware of it.

~~~
xamuel
As a whole, an AGI is so general that we'll probably never be able to quantify
its intelligence level. But we can quantify the intelligence of various "cross
sections" of an AGI. As an extremely simple example, whatever the heck an AGI
is, presumably an AGI can play chess. Therefore, we could instruct an AGI to
compete in chess games and we could compute its Elo rating. This wouldn't be a
measure of the AGI as a whole, but it would be a measure of the AGI as a
chess-player.
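
For anyone curious, the Elo bookkeeping for that cross section is trivial to
compute; here's a minimal sketch (the K-factor of 32 and the logistic-400
formula are standard conventions, nothing specific to AGI):

```python
def expected_score(r_a, r_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a, r_b, score_a, k=32):
    """A's new rating after one game (score_a: 1 win, 0.5 draw, 0 loss)."""
    return r_a + k * (score_a - expected_score(r_a, r_b))
```

Run the AGI through enough rated games and these updates converge on a stable
number for that one facet, even though the number says nothing about the
system as a whole.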

More generally, we can consider the AGI to be a reinforcement learning agent:
not that the AGI reduces to nothing but a reinforcement learning agent (that
would be ridiculous), but rather, whatever the heck an AGI is, an AGI would
certainly be capable of performing in reinforcement learning environments.
Various methods have been proposed to measure RL agent intelligence, ranging
from more practical to more philosophical.

Again, an AGI can be considered as a knowing agent, and identified with the
formal set of mathematical theorems it knows in some language. Again, this
isn't to say that's ALL the AGI is---that would be absurd. But it is one facet
of the AGI. Whatever the heck the AGI is, it presumably knows some set of
mathematical theorems in any given language (possibly the empty set, possibly
an inconsistent set, etc.). There are various ways of classifying and even
quantifying sets of mathematical theorems in a given language. See the slides
I linked above for some details on this particular cross section.

------
empath75
All these articles make the same basic argument, which is “Non-intelligent
processes cannot give rise to intelligence.”

Which is about as true as non-hurricane particles cannot give rise to a
hurricane.

We already know that the human mind is full of non-intelligent atoms, and
perhaps non-intelligent neurons, and yet here we are.

Any argument that makes the case that computers cannot be intelligent is also
making the case that humans can’t, either.

~~~
vidarh
Well, speaking as an atheist who would like to think that the brain is purely
cause and effect, what this boils down to is whether or not you believe that,
or if you believe that somehow there are souls, and "something" interacts with
a brain and imbues it with something we can't simulate that causes
consciousness.

It's tempting to call that bullshit, but we don't know what consciousness _is_
, and it's proving very elusive to find a physical way to explain it. And,
hey, maybe we're all in a simulation and our minds are simulated outside the
main simulation and so the only "conscious" entities are those with an
external simulation feeding data into our universe. I'd tend to think that
rather than "ancestor simulations" like in the simulation argument, it's more
likely we're in some game in that case.

Even a purely physical explanation does not preclude some physical process
we're unaware of that involves some interaction with something physical that
is in effect a black box from our point of view, and something we may not be
able to replicate from "inside" the observable universe. E.g. imagine we're
constrained to however many dimensions, but consciousness involves a process
partially outside "our" dimensions.

Ultimately I'd like to think we find a purely physical/mechanistic model of
consciousness that allows us to replicate it in software; the slightly less
optimistic view is that we'll need some specialized hardware. The pessimistic
view is that there's something involved that means it takes something very
similar to a squishy biological brain to tap into whatever gives rise to
consciousness. But we could also hit a wall.

~~~
gwd
> Well, speaking as an atheist who would like to think that the brain is
> purely cause and effect,

Even if you believe there are souls created by God, you _also_ believe that
you can reliably cause (provoke? induce?) God to create a soul and attach it
to a specific bit of matter, by merely engaging in a certain pleasurable
activity.

Supposing that we could create an artificial life form with the same brain
structures as a human, God could certainly create and attach a soul if he
chose. Given his general willingness to create souls in response to our
actions, is there a strong reason to believe he wouldn't imbue a machine
capable of human-like thought with a soul?

And, how would we know whether this had happened or not? If we can't create a
test capable of distinguishing a group of atoms with a soul from a group of
atoms without a soul, isn't it safer to treat them both the same?

~~~
vidarh
> Supposing that we could create an artificial life form with the same brain
> structures as a human, God could certainly create and attach a soul if he
> chose. Given his general willingness to create souls in response to our
> actions, is there a strong reason to believe he wouldn't imbue a machine
> capable of human-like thought with a soul?

I'd say we have no way of telling. "God" or a game engine or whatever is
creating those "souls" might be someone looking on with bemusement and
enjoying adding souls to anything we want to see what we come up with, or
might be someone who takes great offense we're meddling in their business, or
might be a dumb rules engine that insists on a specific biological interaction
to happen just so before it instantiates another object. Or there might be
some weird quantum mechanical interaction we don't understand that gets
triggered that happens to interact with another dimension. Or a bored teenager
needs to go out and get another USB dongle to upgrade the number of "soul
simulators" their game allows. Or an endless number of other possibilities.

> And, how would we know whether this had happened or not? If we can't create
> a test capable of distinguishing a group of atoms with a soul from a group
> of atoms without a soul, isn't it safer to treat them both the same?

Currently we don't. I don't even know if you're conscious. We don't understand
intelligence well enough to reliably test for intelligence, and we don't
understand well enough how intelligence relates to consciousness or whether
there is a meaningful distinction.

------
Hokusai
I agree with the main point of the article, but I think that it misses the
bigger picture.

> The computer can’t think

I completely agree that our current software cannot think, at least in a
complex way like humans do. But our computers can execute Turing-complete
programs. Is human logic beyond that? It would be an incredible discovery if
we could show that 'human intelligence' > 'Turing complete'. My bet is that
such a thing does not exist.

If you took all computers, phones, etc. and were able to run just one shared
program on them, would that not be enough computing power to think? My bet is
that it probably would be. Human brains are NOT infinitely complex.

> When they succeed, we will be back in the situation so lamented by Pollack:
> that of seeing an AI “breakthrough” or “major advance” as just more
> software, which is what it will be.

Or, we are going to discover that humans can't "think" either. That you can
print, in a very big book, all the logic that makes us tick. And this makes
sense. We have already discovered quite a lot of tricks that our visual system
uses to identify images. And it is full of shortcuts and assumptions. And we
cannot change that: if an illusion makes you think that one line is longer
than another, there is no mental power that allows us to see the truth even
when we have the knowledge.

Machines are not magical; I agree with the main point of the article. It is
just that humans aren't magical either. We are matter that can think, and that
is a great thing, but not magical.

I think, therefore matter can think. I always liked this truism, as it removes
quite a lot of magic from the idea of "thinking".

~~~
RashHeuristics
A Raspberry Pi, being Turing complete, has the ability to simulate the
fastest supercomputer ever built (just give it a big enough hard drive to
hold the contents of the RAM). More speed does not give a computer any extra
ability to think. If that were possible (it isn't), it would be the software
doing all the work, not the speed of the hardware.
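
The universality claim can be made concrete with a toy interpreter (a made-up
Minsky-style register machine, not any real ISA), showing the sense in which
one machine "runs" another: simulation costs time and storage, never
capability.

```python
def run(program, regs, fuel=100_000):
    """Interpret ('inc', r), ('dec', r), ('jnz', r, addr) instructions.

    `fuel` bounds the number of steps so a looping program still halts.
    """
    pc = 0
    while pc < len(program) and fuel > 0:
        op = program[pc]
        if op[0] == 'inc':
            regs[op[1]] += 1
        elif op[0] == 'dec':
            regs[op[1]] = max(0, regs[op[1]] - 1)
        elif op[0] == 'jnz' and regs[op[1]] != 0:
            pc = op[2]
            fuel -= 1
            continue
        pc += 1
        fuel -= 1
    return regs

# Drain register 1 into register 0 (2 + 3 = 5), one slow step at a time:
print(run([('dec', 1), ('inc', 0), ('jnz', 1, 0)], {0: 2, 1: 3}))
# → {0: 5, 1: 0}
```

A faster host just burns through the same steps more quickly; the answer is
fixed by the program, which is the point above.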

------
willis936
“This was the very question that consumed Arnold, filled him with guilt,
eventually drove him mad. The answer always seemed obvious to me. There is no
threshold that makes us greater than the sum of our parts, no inflection point
at which we become fully alive. We can't define consciousness because
consciousness does not exist. Humans fancy that there's something special
about the way we perceive the world, and yet we live in loops as tight and as
closed as the hosts do.” - Robert Ford (Anthony Hopkins)

~~~
mooseburger
It's conflating conscious awareness with the existence or non-existence of
free will. It's conscious awareness that is the ineffable mystery, unless you
believe the chair you're sitting on actually feels you sitting on it. I mean,
it has to if you believe the mind is nothing but atoms, because the chair is
also nothing but atoms. There isn't such a thing as emergent properties, or
neurons, those are just models we use to understand bunches of atoms.

~~~
c22
In the same way I accept that I can eat some leaves, but not rocks, that
mercury is a liquid at room temperature while steel is a solid, and that the
latter can be configured into a car I can drive around in, I accept that some
collections of atoms may possess the property of awareness without
necessitating that all do.

~~~
mooseburger
The brain, like the objects you described, poses no mystery to physics. There
isn't anything about the atoms in the brain that suggests there would be
consciousness, or indeed, anything in physics that predicts consciousness
arising, the way that the properties of any other macro object can be
understood in terms of physics.

------
Nasrudith
Common fallacies about computing and agency aside, I personally believe the
most convincing hypothesis is that intelligence is a spectrum and an emergent
phenomenon. We have already seen grisly brain-damage or defect cases that
result in "missing" aspects. We don't even know all the subparts and
interactions of our own brains, let alone what hypothetical alternative
arrangements could be made - even before those processes engage in
repurposing.

That however doesn't tell us much until/unless we find a hypothetical
threshold and refine it.

------
ThomPete
So various physical and chemical elements can form ever more complex pattern-
recognizing feedback loops and end up like the conscious beings that we are,
but it's not possible for us to further evolve via/with technology?

That doesn't make much sense.

This all depends on the lens one sees this through.

If one takes the perspective that evolution is a way for our genes to "find
ever better vessels for the information carried in our DNA" then it seems like
the meatvessel we are isn't the most optimal for the future.

~~~
lebuffon
An interesting question is perhaps to ask when did our species become
"conscious"? Neanderthal? Erectus? Habilis? Our last common ancestor with
chimpanzee?

It would seem to me that these were all "conscious" in some way and that there
is a continuum of this "consciousness" thing whatever it is.

My current thinking is that higher level "consciousness" is about levels of
self-awareness. I can be self-aware, but am I aware that I am self-aware?

This implies to me that there must be some mechanisms that provide the ability
to get input from one's own processes to begin to form consciousness. Feedback
loops of some sort. I have a sense that given the right quantity and nature of
feedback sensing abilities combined with a neural network, appropriately
biased to some goal like self preservation, a form of consciousness could
emerge. (?)

Pure speculation...

~~~
qayxc
It's not pure speculation at all - it's a very active field of research that
currently points to the fact that consciousness is a spectrum. There are
animals that recognise themselves in the mirror and have a clear concept of
self versus other. There is evidence for plants communicating with each other
by various means. So asking "when did our species become conscious?" is akin
to asking "when did you become an adult?" and expecting a specific date or
event as a reply...

------
Strilanc
I think for me the article hit peak "oh come on" when it said this:

> _In fact, the computer does not play chess at all, let alone championship
> chess. Chess is a game that has evolved over centuries to pose a tough but
> not utterly discouraging challenge to humans, with regard to specifically
> human strengths and weaknesses. One human capacity it challenges is the
> ability to concentrate; another is memory; a third is what chess players
> call sitzfleisch—the ability to resist the fatigue of sitting still for
> hours. The computer knows nothing of any of these._

The author is complaining that airplanes don't flap their wings and submarines
don't swim. Sure, it's technically correct, but it doesn't change the fact
that airplanes are useful and distinguishing "flying [airplane style]" from
"flying [bird style]" usually isn't.

This gets particularly egregious when the author talks about military robots:

> _“The next ethical minefield is where the intelligent machine, not the man,
> makes the kill decision,” it reads. But no existing or presently conceivable
> robot will ever make such a decision. Whether a computer kills a man or
> dispenses a candy bar, it will do so because its program_

Ah, the robot is killing me "because of its program". It's not DECIDING
[HUMAN] it's only deciding [computer]. What a relief. Problem solved.

> _The “should a man or a robot decide?” debate, then, is wrongly framed. The
> only correct question is whether the person making the decision will be the
> designer-programmer of the robot or an operator working with the robot at
> the moment of its use._

There are endless examples of human decisions to create computer systems not
anticipating the computer decisions that those systems make. It's useful to
then be able to say things like "the car decided to not do anything, to
debounce the am-about-to-hit-something signal, which resulted in the death of
the cyclist" without having to constantly qualify with "the decision was
implemented by a series of hard-coded if-else statements and while loops".
Wasting everyone's time with the triviality "an if-then statement isn't a
DECISION [HUMAN] it's only a DECISION [COMPUTER]" is not a useful contribution
to this particular debate.
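
That "decision" can be as mundane as a debounce filter; here's a hypothetical
sketch (invented names, not any real vehicle's code) of exactly the kind of
hard-coded loop that can nonetheless "decide" to do nothing:

```python
def debounced(readings, threshold=3):
    """Flag an obstacle only after `threshold` consecutive detections,
    so transient sensor noise is ignored."""
    consecutive = 0
    decisions = []
    for detected in readings:
        consecutive = consecutive + 1 if detected else 0
        decisions.append(consecutive >= threshold)
    return decisions
```

A cyclist who flickers in and out of the detector never crosses the threshold,
so the system "decides" not to brake: nothing but if-statements, yet the word
fits.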

Applying human thinking terms to computer systems is a useful shorthand. It's
not going away.

