
Human intelligence is overrated (2012) - togelius
http://togelius.blogspot.com/2012/05/human-intelligence-is-overrated.html
======
beat
Multiplying 3842 by 543 is very hard for humans. They're slow and inaccurate.
Computers do it perfectly at incredible speeds.
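
A quick, concrete illustration in Python (the numbers are the ones from this thread and the article; exact integer arithmetic is instantaneous and error-free for a computer):

```python
# The multiplication that is "very hard for humans":
print(3842 * 543)  # 2086206

# Even the article's power example is immediate: Python integers have
# arbitrary precision, so there is no rounding and no overflow.
result = 3425 ** 542
print(result % 10)  # 5 -- any power of a number ending in 5 ends in 5
```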

 _Inventing multiplication_ is something no computer (as we currently
understand them) would ever be able to do.

For another analogy, moving ten tons of rock a thousand feet is something
humans can do, and have done for millennia. It's very slow and difficult. A
bulldozer can do the same thing in minutes. But a bulldozer would never, ever
have a _reason_ to move ten tons of rock.

~~~
arethuza
Doug Lenat's _Automated Mathematician_ (AM) apparently did "discover"
multiplication by itself:

[http://web.onetel.net.uk/~hibou/ai-course/lenat.txt](http://web.onetel.net.uk/~hibou/ai-course/lenat.txt)

[https://en.wikipedia.org/wiki/Automated_Mathematician](https://en.wikipedia.org/wiki/Automated_Mathematician)

Of course, you could argue that multiplication was implicitly there in the
rules and facts it was programmed with.

~~~
progressive_dad
Let me know when humans invent something not governed by the rules the
universe was programmed with ;-)

~~~
beat
Poetry.

~~~
progressive_dad
I wouldn't go that far. Maybe, "witty", but I wouldn't call it Shakespeare.

------
lmm
> the argument goes both ways here as well: take an arbitrary human (such as
> yourself, if you happen to be human) and try placing this human in the
> cockpit of a landing jet plane, in a semiconductor factory, in the oval
> office of the White House, in the kitchen of a gourmet restaurant, on a
> horseback in Siberia, or equipped with only a spear in the middle of the
> Amazonas jungle. There are humans that have been programmed to do well in
> each of these situations, but it is very unlikely that the human you were
> thinking of (perhaps yourself) would know what to do in more than at most
> one of these situations.

This part seems wrong. I think most humans would make a decent go of most of
those situations. Not as good as an expert by any means, but they'd be capable
of doing _something_, unlike a computer program.

~~~
drabiega
While your point is true, I'm not sure it is relevant to measuring
intelligence. The human has two advantages in this situation. First, all
humans have agency: they can take action to do things. If you want the
comparison to be remotely fair, you need to compare against a program that
also has agency: one that sees the same stimuli the human would and has access
to the same range of actions. That is not a particularly difficult programming
task.

Second, if you put an untrained system and an average person in the crashing
plane situation, the average person is probably going to do better, but that's
because your average person _has_ some training in what to do in this
situation. They've been exposed to enough culture to know that the joystick is
the main way of controlling the plane, that a plane moving toward the ground
at a steep angle is likely to be bad, and probably what general action might
help the situation.

Fair comparisons would be, say, an untrained program vs. a baby, or a system
trained with some general knowledge of vehicle control vs. an average person.
I'm not really sure that humans win out in either of those comparisons.

~~~
lmm
> First off, all humans have agency: they can take action to do things. If you
> want the comparison to be remotely fair you need to compare a program that
> has agency to see the same stimuli as the human would and have access to the
> same range of actions. Not a particularly difficult programming task.

Huh? I think the whole point is that that's a very difficult task.

> Fair comparisons would be say, an untrained program vs a baby, or a system
> trained with some general knowledge of vehicle control vs an average person.
> I'm not really sure that humans win out in either of those comparisons.

The human got very little direct training. They mostly got exposed to the
environment and picked it up. You can give the computer program the same
number of years and the same environment as the human got, and it won't do
much good.

------
fossuser
All this talk of 'intelligence' gets confusing - it seems like the core
distinction people are talking about most of the time with 'general' or
'strong' AI is really something more like Artificial Consciousness.

This has issues too - if consciousness is an emergent property of a neural net
with the right feedback mechanisms and training material, then even if you get
the feedback mechanisms right, poor training material could still give you an
artificial consciousness that's stupid.

You see this problem in humans - depending on a lifetime of 'test data'
exposure (parents, peers, environment) and the underlying neural-net
'hardware' of the brain, you can get people who believe a lot of stupid
things.

Maybe consciousness doesn't have to work that way and we're just dealing with
a local maximum of evolution (or some reproduction/sex-drive constraint), but
we might end up stumbling on the ability to create a neural net from which
consciousness can emerge before we can craft the type of consciousness we'd
want.

Actually understanding how the system works is harder.

~~~
TheOtherHobbes
Intelligence exists in human culture, not in individual humans. A high IQ has
limited usefulness without a culture it can learn from and operate in. (Ask
any octopus.)

Human culture persists outside of individuals, so the occasional genius can
change culture _and culture stays changed_ because collective memory is
altered. This doesn't happen with individual animal intelligence.

So I don't think we'll ever see a true AGI. What will happen instead is that
collective cultural memory will improve, and connections between human brains
will become faster and deeper - just as they have done for millennia.

If there's ever a singularity it will be created by human brains operating
together in a new and vastly accelerated shared state, not by silicon brains
that have been programmed to appear sentient.

~~~
fossuser
Your first point is what I was trying to get at - basically the culture is the
'test data' and IQ would be the capability of the 'hardware' or genetic makeup
of an individual brain/person in this case.

I think your point about intelligence not existing in an individual is too
narrow, but ultimately that comes down to how you decide to define
intelligence.

Not sure I agree that AGI won't happen - in fact I suspect the computing power
we have now is good enough to do it; we're just missing some insight into how.

------
prmph
> "Humans are quite stupid in many ways, compared to computers. Let's start
> with the most obvious: they can't count. Ask a human to raise 3425 to the
> power of 542 and watch them sit there for hours trying to work it out.
> Ridiculous"

I hope this guy is just trolling, but in case he is not, this is a tired
argument that should be debunked once and for all.

The brain, in the course of seemingly mundane activities (interpreting what we
see, for instance), effectively performs a stupendous number of complex
calculations per second. [1]

What people confuse is conscious calculations vs. effective calculations. [2]
The brain does not need to output intermediate results of basic operations
because that is not its computational objective.

I was actually a bit disappointed at the shallowness of the article; from the
title I was expecting maybe a discussion of how complex even the very concept
of intelligence is, and how speed of calculations does not necessarily equate
to intelligence.

[1] [http://gizmodo.com/an-83-000-processor-supercomputer-only-ma...](http://gizmodo.com/an-83-000-processor-supercomputer-only-matched-one-perc-1045026757)

[2] [http://chrisfwestbury.blogspot.com/2014/06/on-processing-spe...](http://chrisfwestbury.blogspot.com/2014/06/on-processing-speed-of-human-brain.html)

------
Rhapso
It is an existential-horror sci-fi novel, but _Blindsight_ by Peter Watts is
the only book I've seen that talks about this idea.

Avoiding totally spoiling the book, it asks the question: "Is consciousness
really a survival trait in the long term?"

~~~
beat
That's a great novel. Vampires!

~~~
nabla9
[http://www.rifters.com/blindsight/vampires.htm](http://www.rifters.com/blindsight/vampires.htm)

------
rtkwe
> But the argument goes both ways here as well: take an arbitrary human (such
> as yourself, if you happen to be human) and try placing this human in the
> cockpit of a landing jet plane, in a semiconductor factory, in the oval
> office of the White House, in the kitchen of a gourmet restaurant, on a
> horseback in Siberia, or equipped with only a spear in the middle of the
> Amazonas jungle. There are humans that have been programmed to do well in
> each of these situations, but it is very unlikely that the human you were
> thinking of (perhaps yourself) would know what to do in more than at most
> one of these situations.

At least the random human will have a chance at doing something, and if the
situation isn't life or death (like the landing plane or the Amazon jungle),
they could with time actually learn to operate in the new environment, even
without other people to teach them. That's what's missing from AI: the
flexibility to operate in a chaotic environment. Until relatively recently,
the slightest thing going wrong in even moving through an area, or in
performing a simple task like moving a box, would completely break the system.

> (It's hard to understand why anyone would want to be in a plane flown by a
> human, now that there are alternatives.)

Detecting and ignoring spurious inputs and extreme edge cases is one of the
main reasons I feel better if there's a person alert and at the controls of a
plane. Extreme cases like Qantas Flight 32, where a huge number of systems
were absolutely destroyed, would be a huge challenge for modern autopilots,
which are great but aren't tested or designed for emergencies. [1]

[1] [http://lifehacker.com/the-power-of-mental-models-how-flight-...](http://lifehacker.com/the-power-of-mental-models-how-flight-32-avoided-disas-1765022753)

~~~
tlb
That's partly because nobody has tried very hard to write one piece of
software that's good at landing planes and being the president. Why not just
write two pieces of software? What business model would a combined piece of
software enable?

Specialization of software is mostly a virtue.

~~~
uremog
Isn't it somewhat the same with humans though?

~~~
tlb
Yes. Because a specialized pilot or president will always be better than a
generalist at that particular task.

Part of the argument given for the virtues of generalization is about
availability. Like, if you find yourself the only available person in a plane
over the Amazon, you can improvise a landing. But as the network connects more
things in more places, this becomes less and less important.

------
ktRolster
He makes a flawed argument that computers are smarter than humans, and the
flaw is obvious: a computer couldn't even make that argument. It takes a
human.

------
mafribe
The article is informal about intelligence and then comes up with a couple of
ad-hoc examples where computers beat humans. It's unclear that they have much
to do with intelligence. The following definition of the term has been
proposed:

    
    
       Intelligence measures an agent’s ability to 
       achieve goals in a wide range of environments. 
    

Togelius even addresses this a little by pointing out that humans have to be
trained to be a pilot or a president. But it is unclear at this point to what
extent computers can be intelligent in this sense. AlphaGo's reinforcement
learner, probably the most astonishing part of AlphaGo, was not (to the best
of my knowledge) Go-specific; instead, it was a general-purpose reinforcement
learner. I doubt it can learn more complicated forms of interaction that lack
a simple reward function (which games have).
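
To make concrete what "a general-purpose reinforcement learner driven by a simple reward function" means, here is a minimal sketch in Python. It is tabular Q-learning on a made-up six-cell corridor task, purely illustrative; it is not AlphaGo's actual architecture, which combined deep networks with tree search.

```python
import random

# Toy environment: a corridor of 6 cells. The agent starts in cell 0 and
# receives reward 1 only upon reaching cell 5. Nothing below except step()
# is specific to this task: the same update rule works for any environment
# that emits (state, reward) pairs -- which is why the reward function matters.
N, ACTIONS = 6, (-1, +1)  # actions: move left, move right

def step(s, a):
    nxt = max(0, min(N - 1, s + a))
    return nxt, (1.0 if nxt == N - 1 else 0.0), nxt == N - 1

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

    def greedy(s):  # break ties randomly so early exploration isn't stuck
        best = max(q[(s, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(s)
            nxt, r, done = step(s, a)
            # The generic Q-learning update -- nothing corridor-specific here.
            q[(s, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(s, a)])
            s = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N - 1)}
print(policy)  # the learned greedy policy: move right in every state
```

The update line is the whole "learner"; games are friendly territory precisely because `step()` hands back an unambiguous scalar reward. For tasks where no such signal exists, this loop has nothing to climb.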

Nevertheless, I'm quite optimistic, but it's far from the foregone conclusion
that the author implies it is.

~~~
SixSigma
Or, as alluded to in the article but with the opposite conclusion :

Intelligence is knowing what to do when you don't know what to do.

------
aardshark
I feel like this article is being disingenuous in order to get discussion
going.

------
cosmin800
How did this article make it onto Hacker News? Someone take it down, please.

~~~
AnimalMuppet
Well, you can flag it, but with 42 points and 54 comments, that's not likely
to do much good...

------
jcoffland
> It would be very easy to invent games that were so complicated that only
> computers could play them; computers could even invent such games
> automatically.

Of the many errors in this article this is one of the most flagrant. Computers
have yet to invent any novel and challenging games. That would take ingenuity,
something computers have failed to demonstrate. However, I suspect the author
is trolling a bit.

~~~
pasquinelli
i believe i've read about automatically generated proofs (i believe
automatically, and it's a crucial distinction, but i'm not looking it up right
now, so take it with a grain of salt) so long they can't be reasonably
understood by a human, but they "convince" a computer. such a proof isn't a
proof in the sense that it doesn't convince a human being, which is the point
of a proof. but, since it checks out, maybe a better way to think of it is
that it's a mathematical proof but for an audience other than human beings.

i wonder if a game is much different from a proof. i mean, we can implement
games as programs, so there is a connection at that level.
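
For a toy analogue of this in Python (an exhaustive finite check, not a real proof, so take the analogy loosely): a statement can be "verified" case by case in volumes no human would ever read, yet the machine is, in its own sense, convinced.

```python
# "Verify" that every perfect square is congruent to 0 or 1 modulo 4,
# by checking the first 100,000 cases one at a time. A human convinced
# by the two-line algebraic argument -- even n: (2k)^2 = 4k^2;
# odd n: (2k+1)^2 = 4k(k+1) + 1 -- would never wade through the cases,
# but the machine happily checks each of them.
checked = all((n * n) % 4 in (0, 1) for n in range(100_000))
print(checked)  # True
```

Real machine-checked proofs (as produced by proof assistants) are stronger than finite checking like this, but the social situation is similar: the artifact convinces a verifier rather than a human reader.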

~~~
jcoffland
This sounds more like a computer playing a proof game rather than a computer
inventing a proof game.

------
nitwit005
> Now let's take another activity that humans should be good at: game-playing.

It should be just the opposite. There was no evolutionary pressure on humans
to make them play Chess well. The games are interesting to us partly because
they're challenging and make us think differently.

If you want a fair comparison, you need to look at how successful we've been
at making machines to do things animals _were_ facing evolutionary pressure to
do successfully.

Imagine trying to build an ant. A machine with a tiny, power efficient brain,
that coordinates with its fellows to gather resources and build enormous
hives. Could we make computers do that? Maybe eventually, but we're nowhere
close today. Just making a machine walk with grace is at the edge of what we
can do.

------
colourincorrect
Will a robot/AI crossing the street ever realize (without anyone explicitly
telling it) that if it is struck by a car, it will be incapable of moving?

Will an image recognizer ever "truly understand" what it means for something
to be in a category?

If I set up a Go board such that the pieces on the board resemble a smiley
face or some other pattern, will there ever be a version of AlphaGo that is
able to recognize that? Will it be able to stop playing Go and start placing
pieces on the board that fit into the pattern?

Will an AI be able to make an original joke that is not based on any template?

To my knowledge we don't have an AI that can do any of the above, but any
person would find those tasks easy.

------
mouzogu
Comments seem to have gone off on a tangent about multiplication.

What stands out for me is the irony, that ultimately the purpose of AI is not
to be especially good at multiplication but rather to replicate the tenuous,
fragile and indefinable properties of Human Intelligence that can only come
about through some process more sophisticated than logical binary determinism.

------
jolux
No offense to OP or Mr. Togelius, but this argument is terrible in almost
every way imaginable, and completely unconvincing at that. It makes its case
entirely by subtly introducing straw men and double standards.

Leaving aside that its primary point is made by redefining “intelligence” (a
nebulous term in itself, and the author never clearly provides either
definition), it completely ignores the fact that computers would be completely
unable to do any of these things had smart people not told them how to. You
may say the same of people as well, but people can learn things independently
of being taught. Even something as simple as space and time is understood a
priori by people, but most computers are oblivious to what these things
actually mean.

The arguments about memory are terrible because one might as well say a
library is smarter than a person if all that matters is the accuracy of recall
and the amount of data stored. The computer itself does not know these things
in the same way that we do, if you ask it to find them for you it will search
them the same way you might but more quickly. Knowledge is contextual and
intuitive, and computers currently are not great at context or intuition.

And that’s just knowledge! It’s so easy to demonstrate that human knowledge is
more complex than computer “knowledge” that it’s barely even worth discussing.

I personally think one of the strongest indicators of intelligence in people
is their intuition and the ease with which they adapt to new things, i.e. how
many things come easily to them. Nothing comes easily to a computer.
Everything must be specified clearly and carefully to the computer by a person
who is better at intuition than the computer is. The computer has no way of
knowing whether something is “right” or “wrong” in the sense that those words
intersect with both morality and logic. They might understand that something
is “incorrect,” but that does not carry the negative connotations that “wrong”
does for a human being. Computers have not “computed how to play the game
perfectly”; they were told how to do so through increasing levels of
abstraction, just as pencils have not learned how to make marks on paper. That
a computer does what you tell it to, and by definition can’t do what you don’t
tell it to, is evidence enough of its utility as a tool and not a person.

This is, of course, in the current context of AI. I have no doubt that some
day we will create software with the sort of intuition and ability to
contextualize information that people have. But until then, there’s no sense
in deluding ourselves that we’re already there. If that were the case, we
might as well have stopped doing computer science research with the Bombe 70
years ago.

------
thefox
> Ask a human to raise 3425 to the power of 542 and watch them sit there for
> hours trying to work it out.

You can't compare the speed of a computer to the speed of a human. Just
because a human is slow doesn't mean that the human is stupid, nor that
computers are smarter. The electric current in an integrated circuit moves at
nearly the speed of light. Light itself is also very _fast_, but not very
_intelligent_.

> [...] the world Chess champion has been a computer.

Again, this is a matter of speed. If you gave a human being an amount of time
proportional to what the computer took to calculate the same steps of a chess
game, it would be a truer _comparison of intelligence_.

> Humans have almost no memory

That's because the human brain isn't trained and isn't fully used. We use only
about 10% of our brain. So this comparison also falls short.

> The face recognition software that Facebook uses can tell the faces of
> millions of people apart.

And again, this is only a matter of training and not a matter of intelligence.

------
drabiega
It seems that any discussion about artificial intelligence eventually devolves
into arguments essentially about whether human brains are magical or
deterministic.

------
crusso
There are tools and there are users of tools. Which is the computer? Which is
the human?

------
PaulHoule
My suspicion is that Chomsky's "language instinct" is actually a derangement
of our ability to reason about probabilities, one that leads us to make the
same mistakes consistently and thus understand each other.

------
kazinator
"Human intelligence" is a fiction fabricated from cherry-picked examples of
_individual_ intelligence.

------
danielam
If all you have is a hammer, everything looks like a nail...

I'm not familiar with Togelius' other writings so I don't know how fast and
loose the man is with his words, but in isolation, these "arguments" are like
a compilation of youtube comments. Normally I ignore them, but some days I
take the bait.

The question at issue is made out to be "who's more intelligent, computers or
human beings?" when the real question is "what is intelligence in the first
place?". To merely assume some definition because it suits the author's
position is nothing short of question-begging.

There also do exist powerful arguments against strong AI and computationalism.
Searle's Chinese room argument is perhaps the best known, but by all
appearances, often unappreciated or misunderstood. The essential point he
makes is that computers are syntactic machines, i.e., machines that transform
strings of symbols (which are intrinsically meaningless) according to
syntactic rules. However, human minds contain semantics (concepts). Because
computers are syntactic machines only, they necessarily do not and cannot
possess semantics. They can simulate semantics when a human being formalizes
semantics by producing syntactic rules for the simulation, but no amount of
syntax ever results in semantics any more than skillfully adding clay to a
sculpture can ever produce a human being. Remember, a computer is anything
that implements something equivalent to a Turing machine (a formalization of
effective method).

Aristotle, on the other hand, makes a much deeper argument about the nature of
the intellect that can reinforce a restricted form of Searle's argument, viz.,
his arguments can be used to explain why computers lack semantics by showing
that matter per se cannot possess "concepts" as such and apart from particular
instances. This argument is difficult to appreciate without an understanding
of Aristotle's broader metaphysics. However, the outline of the argument is as
follows:

1\. Matter is particular/concrete (e.g., " _this_ tree/ _that_ rose").

2\. Concepts are abstract (e.g., " _Tree_ as a class/ _Redness_ as such").

3\. The intellect, the organ of abstracting concepts from particular
instances, holds concepts.

4\. Therefore, the intellect is not material. QED.

...adding my own minor premise and conclusion...

5\. Computers are purely material.

6\. Therefore, computers cannot be intelligent.

Note that "intellect" is not a synonym for "mind". Aristotle distinguishes
such things as imagination (phantasm) from the intellect, the former of which
he argues is material. To better see how concepts are immaterial, consider the
word "tree". You may imagine a tree, or even a number of trees, but the image
is always particular, it is always an image of a particular tree whether real
or not. However, none of these is the concept "tree" which is not particular
(if it were particular, then there could only be one particular tree). You can
repeat the same reflection with anything: every triangle you imagine will be
isosceles, scalene, or right-angled, and of some particular color, and indeed
something _triangular_ and not a triangle as such.

The general problem here can be related to the problem of qualia (and
intensionality) and thus the mind-body problem introduced by Descartes'
metaphysics and haunting much of philosophical discourse since (even when the
mind is dropped and the body endowed with the powers attributed to the mind).
Note that Aristotle's immaterial intellect is NOT Descartes' mind.

Others who have argued against computationalist or materialist conceptions of
the mind include Kripke and Popper, but there are many in-depth treatments of
the subject that address many of the claims and objections raised by the
computationalists. That being said, I find "AI" (arguably a misnomer) to be a
very interesting field.

~~~
eli_gottlieb
>4\. Therefore, the intellect is not material. QED.

That is face-palmingly bad logic. Can we pretend that famous ancient sages
weren't so blindingly, obviously dumb as to believe one piece of stuff can't
do multiple things?

>The general problem here can be related to the problem of qualia (and
intensionality) and thus the mind-body problem introduced by Descartes'
metaphysics and haunting much of philosophical discourse since (even when the
mind is dropped and the body endowed with the powers attributed to the mind).

I don't see how the acquisition of abstract knowledge has anything to do with
qualia or the mind-body problem. Abstract knowledge is just hierarchical
modeling.

------
known
Intelligence != Knowledge

