
Q&A: Douglas Hofstadter on why AI is far from intelligent - pilingual
https://qz.com/1088714/qa-douglas-hofstadter-on-why-ai-is-far-from-intelligent/
======
unholiness
The further we get into the future, the more I think, what if our data-
crunching approach to AI is simply the best thing there is?

Hofstadter said in GEB (in 1979) that any program capable of beating the
best humans at chess would have to be a general, human-like intelligence...
human enough to decline your suggestion to play chess and suggest you talk
about poetry instead.

It seems that people today are still engaging in the same sort of fallacy. I
keep hearing that deep learning is too hyper-specialized, that it's a tool and
not an intelligence, that we're still waiting for a revolution of _general_
intelligence, where intelligences will have sophisticated logic and make their
decisions without billions of data points, just like humans do.

My counter-hypothesis is this: Computers, compared to humans, will always be
more data-hungry (i.e. worse at making general decisions without huge amounts
of data) and more data-capable (i.e. better at making decisions with it). And
this isn't really a bad thing. We will still see revolutions allowing more and
more general problems to be solved, revolutions allowing more and more general
data to be considered, and revolutions giving more and more usable interfaces
for inputting data and specifying problems. We'll still see data-driven
intelligent assistants and data-driven board members making critical decisions
like in everyone's utopian dreams/dystopian nightmares.

But these foretold "general" intelligences who don't need excessive data, the
intelligences who pass the Turing test, the intelligences who "want" things
and "feel" things, who attempt to solve the problem of replicating humans...
those lie unimaginably far in the future. And when they do arrive, they won't
really solve any problems that the data-crunchers haven't already solved
better.

~~~
Eridrus
> My counter-hypothesis is this: Computers, compared to humans, will always be
> more data-hungry (i.e. worse at making general decisions without huge
> amounts of data) and more data-capable (i.e. better at making decisions with
> it).

I don't think this is necessarily true. Currently we're trying to push
computers to do things that are relatively easy for humans to do. I would
conjecture that these things are easy for us, not because we are amazing
learning machines, but because we have millions of years of evolution, and
years as infants with caring teachers going for us.

So I think we actually are data-efficient learning machines; it's just that
we have strong priors, _and_ we spend a lot of effort training each other.

If you compare humans to machines on problems that are not intuitive for us,
I think you will find machines to be more data-efficient than we are.

~~~
goatlover
> I would conjecture that these things are easy for us, not because we are
> amazing learning machines, but because we have millions of years of
> evolution, and years as infants with caring teachers going for us.

And so do chimpanzees. Evolution must have provided us with something
additional, which would be our rather more developed cognitive abilities to
employ abstract reasoning and metaphor.

Those abilities aren't learned, they're innate, and they allow us to think in
ways that don't require large amounts of data. An average human being can be
shown an Atari game like Pac-Man and understand the objective of the game
almost right away.

~~~
smoll
But a game like Pac-Man is intuitively understood by the average human because
it’s fundamentally a “human” game - designed by humans, gives humans dopamine
and other chemical hits in a way that we might even perceive as “fun”. Imagine
a game that requires lots of computation and no human-friendly interface - a
machine would obviously “learn” the rules a lot faster.

The more evidence we uncover, especially the research around AlphaGo Zero
(self-play with a specific objective in lieu of millions of years to develop
keen general intuition), the more it feels like “human-like” intelligence is
not some incredible holy grail of general intelligence but an emergent
property of any reasonably directed algorithm.

Another random thought, but Neanderthal man comes to mind as a human-like
intelligence that proved simply not as cunning or vicious as the human
intellect and was outcompeted and stamped out. Imagine if we were to discover
the AI equivalent of Neanderthal intelligence - would we be quick to dismiss
it as subpar and “not general enough” even though it emerged through the same
algorithm (natural selection)?

~~~
goatlover
> But a game like Pac-Man is intuitively understood by the average human
> because it’s fundamentally a “human” game - designed by humans, gives humans
> dopamine and other chemical hits in a way that we might even perceive as
> “fun”. Imagine a game that requires lots of computation and no human-
> friendly interface - a machine would obviously “learn” the rules a lot
> faster.

But if we're talking about creating AGI and the concerns that go with that
(full automation, self-directed goals in the real world, etc.), then the
question is whether DL is enough on its own to get there.

As such, comparing AlphaGo to humans on a variety of tasks like Atari games
or Go is kind of the point. And Google's goal is to turn it into a product,
which means doing tasks humans currently do.

------
lisper
Something often missed when talking about GAI is that we humans are built by
our genes in order to facilitate their (not our) reproduction. Genes don't
care about anything, not even reproduction (except in the sense that water
cares about flowing downhill), but one of the tricks they use to advance
their "agenda" is to build brains that _do_ care about things.

A lot of what we call "intelligence" is actually a side-effect of _caring_
about things more than it is evidence of _thinking_. In particular, it's a
side-effect of caring about the kinds of things that advance our reproductive
fitness. For example: Hofstadter laments that, although computers can now
trounce the best humans in chess, they don't look for "elegant moves" or
decline to play and have tea instead. Hofstadter _cares_ about these things
because chess is more than an abstract mathematical construct. It is, like all
sporting events, a _social_ construct, one that distills the essence of
_competition_ where the participants _care_ about who wins and who loses. And
all of this derives from evolution where genes that build brains that care
about winning competitions outperform genes that don't.

One of the things holding back computers from being GAIs is that we have not
yet figured out how to make them care about anything, and so they cannot
possibly understand the visceral difference between winning and losing, or the
emotional angst of being up against a deadline or deciding to take a risk. All
of these are part and parcel of everything we humans do. The ability to do
math is just an interesting and useful side-effect, but it was never the main
event.

Personally, I think it's a good thing that we don't know how to make computers
care about things because once we figure that out they really will become
potentially dangerous. Our desires are hard-wired into us by our genes. Once
computers have desires of their own, their interests may align with ours, but
that's not a given. And if they don't, that could be a really big problem.

~~~
arketyp
Unsupervised and on-line learning have to be figured out first (perhaps by
finally abandoning backprop methods, perhaps by finding some hybrid
approach); after that, I look forward to seeing veritable reinforcement
learning and creature-like intelligences take shape.

I've always been fascinated by how little is needed in terms of feedback loops
in order for something to appear alive -- soul-like, I could almost say. The
image in my mind is always that of -- and I wish I had a better example --
the heat-seeking homing missile. I'm surprised Hofstadter, who is all about
self-referential loopiness, does not appreciate this, because I strongly agree
with his belief that self-reference is the essence of much mystery. Then
again, I've never heard him passionately entertain chaos theory either, or
psychedelics for that matter. I think Hofstadter has a very particular take on
things (he calls himself a picky person). He can afford to be obstinate
because he is no doubt a free spirit and a brilliant guy, but it does make him
appear dismissive sometimes.

~~~
lisper
> I've always been fascinated by how little is needed in terms of feedback
> loops in order for something to appear alive

You're not alone.

[https://en.wikipedia.org/wiki/Braitenberg_vehicle](https://en.wikipedia.org/wiki/Braitenberg_vehicle)
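
For a sense of how little machinery is involved, here is a minimal sketch (in
Python, with made-up constants) of a Braitenberg "aggression" vehicle: two
light sensors cross-wired to two wheels, nothing more, yet it appears to
chase the light:

    import math

    def step(x, y, heading, light, dt=0.1, base_speed=0.2, gain=1.0,
             width=0.1):
        # Braitenberg vehicle 2b ("aggression"): each light sensor excites
        # the wheel on the *opposite* side, so the vehicle turns toward the
        # light and charges at it.
        readings = []
        for side in (+1, -1):  # +1 = left sensor, -1 = right sensor
            sx = x + (width / 2) * math.cos(heading + side * math.pi / 2)
            sy = y + (width / 2) * math.sin(heading + side * math.pi / 2)
            d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
            readings.append(1.0 / (1.0 + d2))  # brighter when closer
        left, right = readings
        left_wheel = base_speed + gain * right   # crossed wiring
        right_wheel = base_speed + gain * left
        v = (left_wheel + right_wheel) / 2          # forward speed
        omega = (right_wheel - left_wheel) / width  # turn rate
        return (x + v * math.cos(heading) * dt,
                y + v * math.sin(heading) * dt,
                heading + omega * dt)

    x, y, heading = 0.0, 0.0, 0.0
    for _ in range(300):
        x, y, heading = step(x, y, heading, light=(3.0, 2.0))
    # after a few hundred steps the vehicle has homed in on (3.0, 2.0)

There is no internal state at all; the "pursuit" is entirely in the eye of
the observer.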

------
kenjackson
_What frightens me is the scenario of human thought being overwhelmed and left
in the dust. Not being aided or abetted by computers, but being completely
overwhelmed, and we are to computers as cockroaches or fleas are to us. That
would be scary._

This implies to me that his definition of intelligence is really centered on
what we as humans do. And that is interesting, but less interesting than a
more general notion of intelligence.

~~~
sorokod
Not necessarily. It just means that an ML-based approach may be better than
whatever it is we humans do at enough tasks to make humans irrelevant.

Think calligraphy vs. the printing press.

~~~
astrodust
I'm sure some people thought the printing press was the devil incarnate as it
took the human element out of books. Prior to that every letter, every page,
was produced with some measure of human effort. Holding that work, reading
those letters, was something special.

Now there's no direct physical connection between what we write and the book
someone holds, yet we don't run around screaming that automated printing has
destroyed writing.

With intelligence this is likely to be the same thing: AI can amplify regular
intelligence just as the printing press can amplify the ability of one writer
to reach more people.

~~~
pixl97
>yet we don't run around screaming that automated printing has destroyed
writing.

I disagree in part. There have been many complaints about bots in modern
digital book distribution producing tons of crap and flooding markets like
amazon.com with junk. It doesn't take very many processes like this to
produce a corpus of information larger than everything mankind has created.
So in some sense of the word, bots have 'destroyed' writing by volume. That
said, most of it will never be seen in a place where it interrupts shoppers
or readers, as the distributors will filter it out. It does, however, make a
mess of unfiltered, distributed systems of digital books.

~~~
astrodust
Where bots can flood the market with junk, bots can flag and remove junk
books, or at least push them down in terms of visibility. The problem is
Amazon doesn't seem the least bit motivated to even try here.

------
dgreensp
I think it’s really important to distinguish

1. pure problem-solving (chess, Go), from

2. competency of behavior in the world (vision, navigating a maze), from

3. universal cognitive-emotional life (getting frustrated trying to
accomplish a goal and trying a different approach; having competing drives
that form the basis for goal-setting, like hunger, boredom, and self-
preservation), from

4. more arbitrary-seeming, human-like cognitive emotions, like humor and
beauty.

You can have 1 and 2 without anything directly resembling human intelligence,
and 3 with animal-like intelligence, or you could make something completely
alien. An appreciation of humor and beauty would be a great way to demonstrate
you’ve made something like a virtual “human mind.”

There’s no reason computers couldn’t be better at all of these things,
including writing better jokes. There’s a funny blog post somewhere about the
idea of a computer writing superhuman-level funny jokes; I wish I remembered
where!

~~~
dgreensp
Also, what does it mean for intelligent computers to “leave us in the dust,”
when we are “like fleas to them”?

1. They are so much more intelligent than us as to render us insignificant —
because we all know that intelligence is what makes people significant and
worthy.

2. They are better humans than humans, not just more intelligent in a
problem-solvy way but more moral and compassionate as well; truly “better”
(there has been sci-fi about this rather fanciful but easily written scenario)

3. We build “wilier” machines/software that, given the power to do so, can
out-strategize us and win in battle, or outcompete us in the economy. This is
quite possible. Obviously we should limit the power (physical, legal, etc) of
this software. There are real legal and economic issues here — not to mention
futuristic disaster movie plots that could become real — but not moral ones.

4. We build artificial life that’s way smarter than us, and it decides it
doesn’t _care_ about us because we’re such dumb simpletons. The same way we
don’t care about bugs, presumably because they’re dumb, and not savvy
wisecrackers like the main character in Bee Movie, voiced by Jerry Seinfeld.
But why would we expect intelligent software to _decide_ to care about us,
anyway? To judge us and find we have merit? If someone told me they made a
machine with a concept of what other beings are worth and it found me
unworthy, based on reasons such as its being waaay more intelligent, I would
not be surprised or impressed, or more than mildly insulted.

Edit: I guess people are worried about some combo of 2/3/4, where computers
whose judgment we agree with basically say humans are lesser beings — like
bugs — and we think about it and are like yeah, you’re right. Or computers are
so human we are compelled to give them the rights of humans. I’m just not sure
that actually makes sense, or at least it will take decades with many
intermediate stages to talk about first.

~~~
empath75
I think one possible alternative is that we never build anything that
approaches general intelligence, but we build a lot of mostly autonomous
systems that are better than human beings in a lot of domains, and which may
behave in ways that their creators never intended.

Once we allow AIs to manage warfare and the economy with minimal human input,
they are going to alter the face of the planet in ways that we can’t predict,
and probably faster than we can adjust to them.

It can happen in small steps, with algorithmic trading and battlefield drones
gradually being given more and more decision-making power and resources to
control.

They don’t even have to have any sort of intention or independent will—only
autonomy and power.

~~~
dgreensp
Agreed, this seems like a very realistic possibility.

I can see it now: researchers claiming that computers are actually now 1%
better at warfare than humans on the standard corpus of example scenarios.
And then someone decides it would be remiss not to let the computers make the
decisions.

------
danielam
I thought the beginning of the interview showed promise, but toward the end
the conversation comes unmoored from the topic at hand.

I really do wonder how many in the field of AI take as philosophically
unsophisticated an idea as intelligent computers seriously and see it as an
obvious and uncontroversial possibility. What Hofstadter is dancing around
is that intelligence requires semantics and semantics is exactly what
computers, by definition, lack. Knowledge is semantic in nature and thus
computers cannot, strictly speaking, know anything. They cannot reason
because, again, reason requires semantics. Now, we are able to model and then
formalize some domains of reality under some aspect well enough such that
computers can behave in very useful ways, but no matter how sophisticated such
programming gets, it cannot somehow magically cross over into semantics. The
notion is patently absurd. So to say that AI is, literally, far from being
intelligent is like saying the color red is far from being a strawberry. No
amount of red will ever amount to a strawberry.

P.S. I was reminded of this post about Sphex wasps and intelligence, with
mention of Dennett and Hofstadter:
[http://edwardfeser.blogspot.com/2013/12/da-ya-think-im-sphex...](http://edwardfeser.blogspot.com/2013/12/da-ya-think-im-sphexy.html)

~~~
orangecat
_Knowledge is semantic in nature and thus computers cannot, strictly speaking,
know anything._

And a group of biological neurons can?

~~~
goatlover
They can by virtue of being embodied. It's the body interacting with an
environment that provides semantics. For humans, a lot of that is cultural.
Computers only have knowledge to the extent that we deem those patterns of 0s
and 1s to be information.

~~~
pixl97
Eh, your idea of embodiment is iffy. Computers can interact with an
environment. Eventually computers will be _far more able than humans_ to
interact with environments. Humans are equipped with a very advanced set of
input devices we call senses, but after that point we have to use technology
to translate what we cannot perceive into information we can. You cannot see
ultraviolet, you have to translate it into visual light, which is then
translated into neural signals. A computer system can directly or indirectly
connect with just about any sensor we can think of and directly manipulate
that signal.

~~~
danielam
I don't know what goatlover meant, and I don't presume you are necessarily
arguing this point, but adding sensors to computers does not overcome the
fundamental nature of computers I described above.

------
sgt101
A two-minute-old gazelle can outperform any current AI in terms of navigating
the real world. Data processing isn't the whole thing.

~~~
yathern
A two-minute-old gazelle is born with a lot of pretrained instincts:
instincts built by a genetic optimization algorithm that's been running for
millions of years. Spatial reasoning and "run from predators" are not things
it needs to be taught. The data processing is already baked in.

~~~
goatlover
The point is that being "baked in" is something machines lack which biological
intelligences possess. Animal brains don't learn those "baked in" abilities.
We may need to "bake in" similar abilities into our AIs, since recreating
evolution is computationally prohibitive.

~~~
runeks
> The point is that being "baked in" is something machines lack which
> biological intelligences possess.

I’d say it’s the exact opposite: a freshly born CPU is as capable as it will
ever be. The problem is that it never learns anything new, as opposed to
biological organisms, which continuously adapt to their environment.

~~~
xapata
Once the CPU runs a chip factory, it'll be able to adapt in a sense.

------
pjungwir
I am curious if we will someday build AI systems that are stratified with
layers alternating between "statistical" like neural networks and "symbolic"
like the older AI approaches. For example once your image recognition NN is
tagging things, you could feed those tags into a more symbolic reasoning
system.
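
As a toy illustration of that hand-off (everything here is invented for the
sketch: the recognizer is a stub standing in for a real NN, and the tags and
rules are made up):

    # Statistical layer: a stub standing in for a trained image-recognition
    # network that emits (tag, confidence) pairs.
    def detect_objects(image):
        return [("person", 0.97), ("bicycle", 0.91), ("red_light", 0.88)]

    # Symbolic layer: explicit, human-readable rules over those tags.
    RULES = [
        (lambda tags: {"person", "bicycle"} <= tags, "cyclist ahead"),
        (lambda tags: "red_light" in tags, "must stop"),
    ]

    def reason(detections, threshold=0.8):
        # Keep only confident tags, then fire every rule that matches.
        tags = {tag for tag, conf in detections if conf >= threshold}
        return [conclusion for test, conclusion in RULES if test(tags)]

    print(reason(detect_objects(image=None)))
    # ['cyclist ahead', 'must stop']

Unlike the activations inside the network, each conclusion can be traced back
to a specific rule and the tags that fired it.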

In theory Deep Learning should make this unnecessary (I guess?), but in
practice these layers would be very useful. First they would make the system
more interpretable. When your AI is a black box you don't learn anything. It
would be nicer if we could gain heuristics or principles that we didn't know
before, and apply them ourselves. Second with layers we wouldn't have to trust
the AI so blindly. We could see if its thinking makes sense. In particular
many AI applications can create feedback loops where it can perform well at
first, but we'd like to keep monitoring it to ensure it stays that way.

If neural networks resemble our unconscious then symbolic systems resemble our
reason---like "Thinking Fast and Slow". Some people say we must be able to do
"better" than NNs since babies don't need to see millions of dogs to recognize
them. But the pixel-inputs of a NN do seem a lot like the rods & cones inputs
of our vision, and one huge advantage of human perception is seeing in the
_flow of time_. Our observations aren't millions of discrete snapshots, but
millions of connected moments. I wonder how image recognition training would
improve if we trained by showing short videos instead of still images. It
seems that would make it a lot easier to recognize boundaries and possible
variations.

In humans, our unconscious is not something we can easily "see" and reflect on
and correct, but our reason is reflective. We can articulate principles and
opinions and judgments. We can re-use them and continually challenge them,
question them, modify them, even reject them. There is a cynical idea that
reason is nothing but post facto rationalization, and maybe it often is, but
we needn't live that way. We _can_ live an examined life if we choose. But I
can't imagine a machine ever achieving that reflexivity without a symbolic
system.

And isn't that awareness close to what we mean by "consciousness"? Somewhere,
I think in Jacques Maritain, I came across the definition of consciousness as
simply awareness of the self, in particular that the self exists. Being able
to recognize ideas as "mine" and reflect on them also seems like part of what
consciousness is all about.

~~~
pixl97
>Some people say we must be able to do "better" than NNs since babies don't
need to see millions of dogs to recognize them.

I always hate that example.

A dog is not a thing. A dog is a collection of things. By the time a baby sees
a collection of things that is a dog, it has already had a huge amount of
experience in identifying individual parts, such as faces and eyes. Of course,
I wish we knew how to program AI systems that way.

> I came across the definition of consciousness as simply awareness of the
> self, in particular that the self exists.

If you are intelligent, you can cause complex enough changes to the
environment around you that you get caught in 'advanced' feedback loops. If
you don't want to waste massive amounts of energy, or even die, it is
beneficial to be able to separate "this was caused by me and my actions" from
"this was caused by someone or something else".

------
arikrak
This is not at all how computers play chess:

> A computer can beat a human at chess not by searching for the satisfaction
> of making an elegant move, but sifting through millions of previously played
> games to see which move is more likely to lead to victory.

~~~
ubernostrum
A lot depends on the era you're talking about, for computer chess. It used to
be that forcing a chess computer out of its "book", so that it had to start
relying on move-tree searches much earlier in the game when that tree is still
monstrous in size, was a good or at least OK tactic. As the computers got
better, and as more effort was put into anti-anti-computer tactics, it stopped
being useful, but to say that this is "not at all" how computers have played
chess is incorrect.
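
A toy sketch of that structure, using Nim instead of chess so it stays
self-contained: answer instantly from a precomputed "book" while the position
is known, and fall back to full game-tree search (the expensive part) once
forced out of book. The book entries here are made up for illustration:

    # Nim: players alternate taking 1-3 stones; taking the last stone wins.
    BOOK = {10: 2, 9: 1}  # position -> known-good move, precomputed offline

    def negamax(pile):
        # Exhaustive search. Returns (score for player to move, best take).
        if pile == 0:
            return -1, None  # opponent took the last stone; we lost
        best = (-2, None)
        for take in (1, 2, 3):
            if take <= pile:
                score, _ = negamax(pile - take)
                best = max(best, (-score, take))  # their best is our worst
        return best

    def choose_move(pile):
        if pile in BOOK:           # cheap: a table lookup
            return BOOK[pile]
        return negamax(pile)[1]    # expensive: search the whole move tree

    print(choose_move(10))  # in book: 2
    print(choose_move(7))   # out of book: search finds 3 (leaving 4)

Forcing an opponent "out of book" early means every move has to come from the
search instead of the table, which for chess (unlike this toy game) is
astronomically more expensive.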

~~~
yesenadam
And it's also not what they said.

------
sonink
> What frightens me is the scenario of human thought being overwhelmed and
> left in the dust. Not being aided or abetted by computers, but being
> completely overwhelmed, and we are to computers as cockroaches or fleas are
> to us. That would be scary.

I suspect our expectation of GAI is unreasonable and we will sooner or later
have to reconcile it with a different and less anthropomorphic expression of
intelligence and consciousness. It might not be required for AI to be
(anthropomorphically) intelligent or conscious for it to 'take over'. In
fact, it might be a huge advance over mankind that it is not.

------
KiwiCoder
Suppose a simulated intelligence was indistinguishable from real intelligence.

In what sense would it matter that it was a simulation and not real?

~~~
astrodust
We're just a simulation of intelligence that's done with clunky, inelegant
biological materials. The only thing that separates humans from slugs is the
scale and complexity of the neural interconnections; all the basic parts are
present in a slug.

Intelligence itself is an emergent phenomenon, so who's to say what's
"simulated" and what's "real"? If it behaves in an intelligent manner, which
can be tested and probed exhaustively, then it _is_ intelligent.

