

Artificial Intelligence, Really, Is Pseudo-Intelligence - kevbin
http://www.npr.org/blogs/13.7/2014/11/21/365753466/artificial-intelligence-really-is-pseudo-intelligence

======
zefei
To say something is pseudo-intelligence or true intelligence, one has to
define intelligence rigorously. That is more philosophical than people think,
because of how little we scientifically understand about intelligence, or
consciousness, or even life.

For most people, the intuition of intelligence is a bit naive, and nowadays
intuition is often the opposite of the truth. It usually runs along the lines
of "looking intelligent and knowing what it is doing". Something like the
Turing test captures the first part intuitively. But we soon found that we
wouldn't call something "intelligent" even if it passes the Turing test, since
the machine is unlikely to know why it is saying something; hence the latter
part. But again, this is about consciousness or "strong AI", which is even
further from our understanding than "intelligence". How do we know that the
machines that "look intelligent" don't "know what they are doing"?
Consciousness is very hard to define and even harder to inspect, and maybe
someday machines will begin to have consciousness before we even realize it?
In fact, we have already done this to most animals, thinking they are not
intelligent or self-conscious when they are.

But to me, the psychology behind these definitions is very interesting and
maybe quite important. No matter how we try to define intelligence, we
wouldn't call machines intelligent until they behave very much like us,
because to most people intelligence is what defines humans, so they
intuitively use their feeling of "does that feel human" to benchmark
intelligence. And psychologically, we are exceptionally good at telling
something "not human" apart from something "feeling/looking/speaking like a
human". But this is a very narrow and, if I dare say, ignorant and
self-centered view, because it implicitly relies on the assumption that "we
are far more intelligent than any other life form, and human intelligence is
the only way intelligence can work".

Think about it: if there were a super-intelligent being observing our world,
maybe to it human intelligence is as primitive as a fish's, and trees aren't
far behind; and maybe entities such as countries or the internet are actually
more intelligent "life" forms, and we just haven't realized it yet because
they don't look like us at all; and maybe, on a scale of intelligence from
zero up to that super-intelligent being, humans, birds, and machines are
clustered in a small range that, though significantly larger than zero, is
still not far from it.

Although research on the human brain, and on much else about ourselves, has
inspired AI research quite a lot, it is perfectly possible that we will create
something truly intelligent or conscious without fully understanding human
intelligence or consciousness. We can have a different kind of
"intelligence", and our research is well on its way toward it. It's very hard
to see, because we don't understand intelligence, and maybe our hardware isn't
yet powerful enough to fully exploit our theories.

~~~
lbenes
While I agree with your point that intelligence must be defined rigorously, we
don't need to leave it to the philosophers. After categorizing the types of
definitions, Shane Legg successfully used "Property of an agent that interacts
with its environment so as to successfully achieve goals across a wide range
of environments"[1] for his work in AI.
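Here's a toy sketch of that definition in Python. The environments and the agent are invented by me for illustration; Legg's actual formalism also weights environments by their complexity, which this ignores:

```python
import random

def evaluate_agent(agent, environments, episodes=100):
    """Average goal achievement across a range of environments (unweighted)."""
    per_env = []
    for env in environments:
        wins = sum(env(agent) for _ in range(episodes))
        per_env.append(wins / episodes)
    return sum(per_env) / len(per_env)

# Two invented toy "environments": each runs one episode and returns
# 1.0 if the agent achieved the goal, else 0.0.
def guess_parity(agent):
    n = random.randint(0, 99)
    return 1.0 if agent({"task": "parity", "n": n}) == n % 2 else 0.0

def guess_sign(agent):
    n = random.randint(-50, 50)
    return 1.0 if agent({"task": "sign", "n": n}) == (n >= 0) else 0.0

# A narrow agent: perfect on parity, clueless on sign. Like Deep Blue,
# it only shines in the one environment it was built for, so its score
# across the *range* of environments is mediocre.
def narrow_agent(observation):
    return observation["n"] % 2

score = evaluate_agent(narrow_agent, [guess_parity, guess_sign])
```

The point is that a chess engine maxes out one environment while scoring zero everywhere else, so the averaged measure stays low.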

What Alva Noë is hinting at is the flaw with weak or pseudo-intelligence, as
Alva calls it. As impressive as Deep Blue is at chess, or Watson at Jeopardy,
they fail the "goals across a wide range of environments" test. This is
obvious for Deep Blue, and for Watson once you realize it's just a glorified
search engine.

I often hear people claim Watson learns because it improves based on the
answers in a category. While it does use previous answers to tweak its final
ranking algorithm, all of that is lost once the category is finished. To
really "teach" Watson, you need to add more data to its index and re-index
offline.
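A purely hypothetical sketch of that "tweak, then discard" behavior (all names and numbers here are invented; this is not Watson's actual pipeline):

```python
class CategoryTuner:
    """Re-ranks candidate answers with per-category nudges that are
    thrown away when the category ends."""

    def __init__(self):
        self.bias = {}  # answer-type -> score nudge, lives only per category

    def rerank(self, candidates):
        # candidates: list of (answer, base_score, answer_type) tuples
        return sorted(candidates,
                      key=lambda c: c[1] + self.bias.get(c[2], 0.0),
                      reverse=True)

    def observe(self, revealed_type):
        # After a correct answer is revealed, nudge toward its type.
        self.bias[revealed_type] = self.bias.get(revealed_type, 0.0) + 0.1

    def end_category(self):
        # All in-category "learning" disappears here; nothing is
        # written back to the underlying index.
        self.bias.clear()
```

The `end_category` call is the point: the index itself never changes, which is why calling this "learning" is a stretch.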

We have a workable definition of intelligence. What we're missing is a
fundamental primitive unit of strong AI. Until this is found, AI will remain
just a bag of tricks, only capable of solving the problems its programmers
planned for.

[1]
[https://www.youtube.com/watch?v=V6umr1OP8uo](https://www.youtube.com/watch?v=V6umr1OP8uo)

~~~
baddox
There's still a huge dearth of rigor in defining the width of a range of
environments, the degree of success in achieving goals, and the significance
of various goals. I'll expound briefly on the first: who is to say that the
range of "environments" a chess supercomputer faces is narrower than the
environments a human faces? Obviously we intuit our range of environments to
be wider, but can we explain why, rigorously?

~~~
lbenes
It's more than intuition. Both chess and Jeopardy are subsets of the human
environment. Watson couldn't handle tic-tac-toe, never mind chess. Change one
rule of chess, and a human could cope but Deep Blue would need to be
rewritten.

------
jostmey
The author's comparison of intelligence to biological systems is absurd.
Biological systems are more complicated than necessary. Let me say that again:
biological systems are overly complicated. This is because natural selection
has no regard for simplicity. Evolution is a blind process of trial and error
that sticks with what works. So just because the gene networks of an amoeba
are a confusing mess, which they are, does not imply that intelligent machines
have to be just as complicated.

I wrote a post on this awhile ago:
[https://docs.google.com/document/d/1wlFJtuBgnVkWm1nflKtJAy3b...](https://docs.google.com/document/d/1wlFJtuBgnVkWm1nflKtJAy3bWCuvBAZDH-
GlgadCTCc/edit?usp=sharing)

~~~
nyrulez
This argument would have merit if our "AI" were anywhere near biological
intelligence. We don't even understand how most of the brain works, what it
does, or how it does it. In fact, if it were so simple to re-create even some
aspect of the brain, we would have created a super-brain by now, given the
resources at our disposal and the time we have spent in the field of AI. It is
like saying calculus is unnecessarily complex because all I can do is simple
arithmetic.

~~~
tensor
Why is the brain the benchmark for "intelligence"? We already have machines
that can run circles around the brain on very complex reasoning tasks. True,
we don't have a talking robot that simulates humans, but in many other tasks
we've met or exceeded what humans can do.

I would argue that we already have intelligent machines. What we don't have is
a human simulator.

~~~
baddox
Haven't you heard? At the moment a computer exists which can outperform a
human mind at a certain task, that task can no longer be considered a
criterion for intelligence. Multiplying large numbers doesn't require
intelligence. Chess doesn't require intelligence. Facial recognition doesn't
require intelligence. Proving mathematical theorems doesn't require
intelligence.

~~~
brianberns
There's a widely accepted standard for judging intelligence called the Turing
Test. Computers are still a long way from passing that test.

~~~
nl
They aren't that far from passing it, actually.

One kinda sorta passed it earlier this year by pretending to be a 13-year-old
boy, though that was at the annual Turing Test event[1].

Even if we ignore that, given the recent interest in using RNNs to generate
natural language, I'd say we are 18 months from being able to pass it
reliably.

[1][http://www.reading.ac.uk/news-and-
events/releases/PR583836.a...](http://www.reading.ac.uk/news-and-
events/releases/PR583836.aspx)

~~~
plikan13
I strongly doubt this. Here's a good way to defeat any computer in a Turing
test: tell a joke, and ask it to explain why it is funny.

~~~
nl
[http://en.m.wikipedia.org/wiki/Computational_humor#Joke_reco...](http://en.m.wikipedia.org/wiki/Computational_humor#Joke_recognition)

------
mbleigh
This is quibbling over semantics. I bet there are software simulations of
amoeba behavior indistinguishable from the real thing. It's placing an
inordinate degree of importance on physical reality. Is a perfect simulation
different from the real thing? That is a fundamental philosophical question
whose answer can't simply be assumed to be no.

~~~
heroic
Let's look at it this way: does the amoeba simulation ever begin to show signs
of evolving, even given a millennium? If not, then it's not intelligence; call
it video playback if you will.

~~~
natch
The author is simply ignorant. This kind of evolution of simulated amoeba-like
organisms was done over 20 years ago with Tierra:
[https://en.wikipedia.org/wiki/Tierra_(computer_simulation)](https://en.wikipedia.org/wiki/Tierra_\(computer_simulation\))

------
rdtsc
This is a pretty short and simple article, but I think it is interesting and
strikes at the core of the controversy over what AI is.

Is something that behaves like a human really intelligent? Google Search
answers a lot of questions almost like a smart, well-educated person. Is it an
intelligent entity? A computer plays chess better than humans. Is it
intelligent?

In this case I think the author wants an entity to behave like a biological
organism to be considered intelligent. This includes autonomy and self-
replication. I can see that. It sort of points to robotics as the field that
will produce such entities. Or maybe computer viruses...?

This is also a classic debate between behaviorists and those like Chomsky who
want to understand the underlying processes of human cognition and
intelligence. I can see both sides and maybe I like Chomsky's more idealistic
view a bit better...

Here are a few easy to read articles that talk about this:

[http://www.theatlantic.com/technology/archive/2012/11/noam-c...](http://www.theatlantic.com/technology/archive/2012/11/noam-
chomsky-on-where-artificial-intelligence-went-wrong/261637/)

[http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-
the-f...](http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-fight-
for-the-future-of-ai)

Another interesting one (a bit longer) in the same vein by David Deutsch (yes
that Deutsch, the "multiple Universes physics guy")

[http://aeon.co/magazine/technology/david-deutsch-
artificial-...](http://aeon.co/magazine/technology/david-deutsch-artificial-
intelligence/)

------
stiff
The whole article is based on a false premise: the behaviour of unicellular
organisms is now understood very well by biologists, and it is clear they are
basically nothing more than automatons. A simple explanation of amoeba
behaviour can be found here[1], for example:

 _When an amoeba encounters a chemical that attracts it, the chemoattractant
binds to receptors on the amoeba's surface. The receptor then activates a
protein called a G-protein, which activates a protein called PLC. The PLC
modifies other molecules to create short-lived messengers that will activate
other pathways necessary for movement by the cell._

The story is much the same for any other behaviour they exhibit; it's just a
simple chain of chemical reactions triggered by an impulse from the
environment. Unlike with more complicated organisms, where we only hypothesize
the same to be the case (like humans?), in unicellular organisms we have
actually traced many of these pathways.
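That chain is simple enough to caricature as a fixed pipeline in code. This is a deliberately crude sketch of the receptor → G-protein → PLC → messengers chain quoted above; the real chemistry is continuous and concentration-dependent, not boolean:

```python
# Each step fires only if the previous one did, like the quoted pathway.
def receptor(chemoattractant_present):
    # Chemoattractant binds to a surface receptor (or doesn't).
    return chemoattractant_present

def g_protein(receptor_active):
    # Active receptor activates the G-protein.
    return receptor_active

def plc(g_protein_active):
    # G-protein activates PLC.
    return g_protein_active

def messengers(plc_active):
    # PLC creates short-lived messengers that trigger movement pathways.
    return ["move_toward_source"] if plc_active else []

def amoeba_step(chemoattractant_present):
    """One stimulus-response cycle: impulse in, behaviour out."""
    return messengers(plc(g_protein(receptor(chemoattractant_present))))
```

There is no state, memory, or deliberation anywhere in the chain, which is the sense in which "automaton" fits.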

People cling very emotionally to their beliefs about intelligence, and this
blinds them to simple logic. It is the same thing in operation that caused the
backlash against the theory of evolution. In the end, I don't see why such
determinism should sadden anyone: our everyday experience is what it is, and
its nature is not changed by learning that, ultimately, we react somewhat
automatically.

[1] [http://www.ehow.com/info_8517923_ways-amoeba-sees-
responds.h...](http://www.ehow.com/info_8517923_ways-amoeba-sees-
responds.html)

------
baddox
What an astonishingly vapid and frustrating article. We need to get past these
semantic arguments about what constitutes "intelligence." The "do submarines
swim" analogy is clichéd, but perfectly appropriate. It doesn't matter how you
define "swim." Submarines are better at certain tasks than human bodies on
their own. Likewise, the computer systems we label "AI" are better at certain
tasks than human bodies (brains) on their own.

The author seems to use "intelligence" and "autonomy" interchangeably, which
is troubling, mostly because he makes no attempt at actually defining either.
His claim that amoebas and plants "exhibit an intelligence, an autonomy, an
originality, that far outstrips even the most powerful computers" is ludicrous
under any remotely plausible definitions of the terms. If humans genetically
engineer an amoeba or a plant to change its behavior, is the magical
"autonomy" suddenly gone?

Also, his argument is ridiculous even as semantic arguments go, since
"artificial" isn't _supposed_ to mean "synthetic." The standard dictionary
definitions are "made by humans" and "false or pretending."

------
snyp
I don't think the author has enough knowledge of what is actually going on in
the field of AI to provide reasonable arguments; all she/he spoke about was
Watson and clocks. I think she/he must understand there is more to AI than
what makes the news.

------
jasonlfunk
I'm happy to see an article like this on HN. It is much closer to my thoughts
on AI than the general optimistic inevitability of AI one usually sees in the
tech community.

------
nl
Ha! Artificial intelligence is always whatever we can't yet work out how to
do.

As a general principle, if someone claims X is not AI, then they need to
define a test they will accept as a true test of AI.

"The agency and awareness of an amoeba" is achievable now[1].

[1] [http://www.smithsonianmag.com/smart-news/weve-put-worms-
mind...](http://www.smithsonianmag.com/smart-news/weve-put-worms-mind-lego-
robot-body-180953399/)

------
esfandia
I think the author should be using the term "agency" instead of
"intelligence", and then I might agree. The amoeba or the most basic animal
doesn't have much intelligence, but it has a drive and desire to survive and
reproduce. That's not true of artificially intelligent beings. And while you
could hard-code such a drive (see the Belief-Desire-Intention frameworks in
the multi-agent systems literature), it will ultimately still be "pseudo" by
definition. The notion of survival in a machine will only ever be a trick
designed by a human to get the machine to do something for us, not something
innate.

Maybe one day we'll implement some Adam and Eve robots with an initial hard-
coded survival DNA, that the robots will abide by and transmit to their
children robots. Then those children robots will evolve and one day think of
building thinking machines (maybe out of some organic material) to work for
them and will think of faking some survival code (maybe using some sort of
protein?) in them to create pseudo-agency... ;)

~~~
baddox
I'm surprised whenever I see what seems to be belief in dualism in ostensibly
serious scientific discussion about _humans_. This author goes so far as to
attribute dualism to _amoebas_.

------
mcphage
That's okay, natural intelligence is really only pseudo-intelligence, too.

------
calhoun137
This article makes a number of very bold claims about poorly defined concepts,
and provides zero evidence to back up anything it says.

------
fogleman
I agree that Watson is nothing like a true artificial intelligence. But I do
not believe that this means we are unable to create such a thing.

I think true AI will be a highly parallel learning machine, possibly modeled
on something like the human brain. But I do not think we need to fully
understand the human brain to create an AI smarter than us.

I think one of the big challenges may be in providing the proper sensors /
inputs to this learning device.

I think once we have a true AI, we won't really understand how it works, just
as we can train a neural network but can't look at the network itself and make
much sense of its parts.

I think it's possible that the road to the singularity may be gradual, and we
might not even realize when it happens.

Edit: Today's xkcd is somewhat relevant:
[http://xkcd.com/1450/](http://xkcd.com/1450/)

------
ChuckMcM
Man, that comment thread takes me right back to freshman philosophy class. I
particularly appreciated the notion of an amoeba having 'agency'. Setting
aside the mechanical reconstruction and processing of the organics of DNA,
systems have been built, and discussed at Artificial Life conferences, that
have not only the 'agency' of an amoeba but emergent behaviors in flocking,
social adaptation, and mutual defense. And what does that say about
intelligence? Pretty much nothing, just as all mass being attracted to all
other mass clearly demonstrates gravity but says nothing at all about what
gravity is.

But it is a lot of fun to speculate about.

------
Xcelerate
"But it's striking that even the simplest forms of life — the amoeba, for
example — exhibit an intelligence, an autonomy, an originality, that far
outstrips even the most powerful computers. A single cell has a life story; it
turns the medium in which it finds itself into an environment and it organizes
that environment into a place of value."

As someone who works in this field: we're not _that_ far away from a fully
atomistic simulation of a single cell (with a couple of assumptions about
reactive force fields, quantum cut-offs, numerical stability, time scales,
yada yada...).

------
yk
My problem with the article is that it never explains the difference between
"processing" information and "having" information, and I think the entire
argument hangs on this distinction. The author essentially claims that an
amoeba has some essence of life which a computer cannot possess, but fails to
show that the distinction matters beyond semantics. (To clarify, I think this
is an entirely defensible position; it is just one of those questions where
the details of the definitions really matter.)

------
Patrick_Devine
The brain is clearly marvelous, but it's not magic. It's a physical object
with rules which govern it just like anything else. To think that we won't be
able to crack the AI nut just smacks of insecurity on the part of the OP. We
are, whether anyone wants to accept it or not, just meat robots.

~~~
3rd3
I think a widespread problem in this discussion is that people misinterpret
the word 'just'. It clearly means "just meat, no magic". But what people
mistakenly hear is "just meat, _inferior_ to magic".

------
sysk
I'm starting to believe (hard) AI should get the same status as politics and
religion on HN. Those comment threads debating the true nature of AI are
getting repetitive.

