

Top Artificial Intelligence system is as smart as a 4-year-old - tokenadult
http://www.computerworld.com/s/article/9240801/Top_Artificial_Intelligence_system_is_as_smart_as_a_4_year_old

======
saosebastiao
I should have expected to see a bunch of blather about AI overpromising and
underdelivering. Never mind the fact that expert systems such as the
descendants of Mycin and DART have been making billions of dollars in profits
for decades when applied in specialized domains like logistics and health
diagnostics. Never mind the fact that we can't go a week on HN without seeing
an article gushing about driverless cars. Never mind the fact that mass
production of everything from cars to toothbrushes is impossible or infeasible
without it. Never mind the fact that nearly all forms and modes of mass
transportation are scheduled using algorithms developed in AI research.
Never mind the fact that you can actually use email, thanks to the AI research
underpinning spam filtering. Never mind the fact that computers have beaten the
best humans in some of the most well-known games of strategy and trivia.
Never mind the fact that one of today's largest corporations, Google, makes
nearly all of its profit from a single application of AI.

No. AI is just making a bunch of ridiculous promises that it never intends to
keep...solely because _general_ intelligence isn't f***ing Skynet yet and
because Siri can't tell the difference between Ichiro and Itchy Euro.

~~~
Houshalter
Be glad that we don't have AGI yet. The world would be a very different place
with machines much smarter than us.

~~~
Jach
I'm sad because such a world is probably a better one, so long as the AGI is
Friendly and doesn't turn us into paperclips or worse.

~~~
Houshalter
I don't know. It honestly sounds boring to live in a world where all our
problems have been solved and there is nothing left to do or accomplish. Maybe
we will all live in simulated realities, but that sounds kind of dystopian to
me.

~~~
Jach
That sounds more like a failure of imagination than an argument that there
will be no more fun, challenge, or creative endeavors in the universe once
the worst of our problems are solved. Imagine a future where you no longer
have to worry about death and taxes, and where you have a superior brain you
can modify however you want, ending akrasia and unwanted depression, giving
you an even deeper sense of fun, and opening up new problems we can't even
fathom at our current stage of development. Do you really think that in such
a future you would have less fun than you're having right now, today,
reading HN?

You might like to read
[http://lesswrong.com/lw/xy/the_fun_theory_sequence/](http://lesswrong.com/lw/xy/the_fun_theory_sequence/)

~~~
Houshalter
What would be the point in doing anything? Anything you could build, program,
or discover could be done ten times better by an AI. It could probably even
design entertainment, like movies, games, music, etc., far better than human
artists.

A version of myself with vastly increased intelligence _sounds_ interesting,
but I'm not even sure it would be _me_. If you make enough modifications to
your brain you become a completely different being, with a different
personality and thought process and little resemblance to who I am now. That
is disturbing, to me at least. And the thought of living inside a computer
doesn't exactly sound pleasant either.

Even society itself might not exist anymore. Why would people interact with
each other if there is nothing left to talk about that can't be instantly
communicated? Why would people even spend time with each other if there are
virtual realities to live in that are far more "fun"? Spending eternity in a
fantasy world sounds awful.

Yes, I want to cure diseases and make everyone rich, but if you keep going
and make a machine that solves every last minor problem and does everything
there is to do, then there is nothing left for us.

And that's assuming that friendly AI is even possible, which I doubt, and
which I doubt will be discovered before AGI anyway. Either way, the world we
know will be utterly destroyed.

------
tokenadult
A friend of mine who knows Professor Robert Sloan

[http://www.cs.uic.edu/Main/Faculty-Area](http://www.cs.uic.edu/Main/Faculty-Area)

(I've met Professor Sloan in person once) told me about the article submitted
here. He has a conference paper coming out at the AAAI conference next week,

[http://www.aaai.org/Conferences/AAAI/aaai13.php](http://www.aaai.org/Conferences/AAAI/aaai13.php)

with more details about the research. It looks, from the first few comments
submitted here, like many Hacker News readers think either

a) that the article wasn't sufficiently respectful of the field of artificial
intelligence,

or

b) that the field of artificial intelligence deserves no respect.

But my understanding of Professor Sloan is that he takes artificial
intelligence, his main current topic of computer science research, very
seriously, and he is well aware of the societal importance of artificial
intelligence research. Maybe the message was lost in the ComputerWorld
reporter's treatment, but perhaps the conference paper will set the record
straight.

On my part, I was glad our mutual friend told me about this article, as only
today I completed an extensive edit of the Wikipedia article about IQ
classification,

[http://en.wikipedia.org/wiki/IQ_classification](http://en.wikipedia.org/wiki/IQ_classification)

so I've been pickling myself in scholarly writings on IQ testing recently. My
friend is aware of that, and I think shared this link because of the angle of
giving an IQ test designed for a human child (the WPPSI is strictly a
preschool-age test) to an artificial intelligence system. Of course, we expect
artificial intelligence systems to do different things from what preschool
children do, so it's not surprising that an "expert" artificial intelligence
system might not score high on a human IQ test. I'll read the professional
publications when they come out to find out more.

Are any of you going to the AAAI meeting?

~~~
tantalor
The article says the AI performed well on vocabulary, but poorly on
comprehension, which resulted in "4 year old". I wonder how well it performed
on each section separately? Was its comprehension nil? (as we'd expect)

~~~
tokenadult
Based on what I know about the WPPSI "comprehension" subtest, from taking a
Wechsler Adult Intelligence Scales Revised (WAIS-R) test in the early 1990s,
and from reading practitioners' manuals about IQ test administration since
then, I would expect an expert AI system to correctly answer some but not all
of the WPPSI comprehension subtest items. I wonder if the professional paper
on this issue will provide more details of the system's subtest scores for
each WPPSI subtest.

------
dschiptsov
This is, of course, a nonsensical claim. The system might be able to recognize
texts or make logical inferences like a 4-year-old, but it is incapable of
acquiring knowledge unassisted, just by exploring an environment, a task any
four-year-old routinely does most of the time. In other words, systems can do
some very specific and restricted tasks at a performance level comparable to a
4-year-old's. But that is not "smartness".

~~~
saosebastiao
[http://en.wikipedia.org/wiki/AI_effect](http://en.wikipedia.org/wiki/AI_effect)

~~~
Houshalter
This isn't really the AI effect. Manually entering in facts and actually
learning them from observations are very different things.

~~~
saosebastiao
Machine learning can be defined as the machine-augmented acquisition of
knowledge through observation. It has been an integral part of AI research
since 1957. The fact that you have separated Machine Learning from Artificial
Intelligence is a _perfect_ example of the AI effect.
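
To make that distinction concrete, here is a minimal sketch of acquiring knowledge through observation: a perceptron that derives a decision rule purely from labeled examples rather than from hand-entered facts. The data and names are invented for illustration.

```python
# Minimal perceptron: learns a decision rule from observations alone.
# All data and names here are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify a new observation with the learned rule."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# "Observations": points roughly above the line x1 + x2 = 1 are labeled +1.
data = [((0.0, 0.0), -1), ((1.0, 1.0), 1), ((0.2, 0.1), -1), ((0.9, 0.8), 1)]
w, b = train_perceptron(data)
```

Nothing about the decision boundary was typed in by the programmer; it falls out of the observed examples, which is the sense in which machine learning has always been part of AI.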

------
EthanHeilman
AI, making absurd claims since 1956.

~~~
dschiptsov
It is not AI, it is ComputerWorld - very close to Cosmopolitan.

~~~
tekromancr
Ah, yes. "Five ways to give your hard drive the best defragging ever!"

------
spot
how many times have you read this about AI (quoting from the story now):

Basically, it's difficult to program common sense because scientists haven't
yet figured out how to give systems knowledge about things that humans find
obvious, like the fact that ice feels cold.

"All of us know a huge number of things," said Sloan.

------
rollo_tommasi
So the story is essentially that a computer program was able to succeed at a
pattern-recognition exercise, correct? Or am I missing something...?

------
MichaelAza
> "All of us know a huge number of things," said Sloan. "As babies, we crawled
> around and yanked on things and learned that things fall. We yanked on other
> things and learned that dogs and cats don't appreciate having their tails
> pulled."

Then let the damned thing explore! Why do we make such huge advances in
robotics if not for this? A baby-like robot shouldn't be _that_ hard to make.
I'm no AI expert, but couldn't we just throw our best learning AI in there and
give the thing a couple of years?

Why are we trying to short-circuit human learning instead of mimicking it?

~~~
TuringTest
Leaving a robot to explore and learn wouldn't do it any good. If you look at
the successes and failures of AI listed in the article, modern AI is well
suited to sensory recognition and pattern matching, and bad at abstract
cognitive reasoning - just as it has always been. There are logical
reasoners, of course, but they can only reason about whatever inference rules
the programmer has previously put into the system.

So letting a robot roam around the premises wouldn't be much different from
feeding it slides of sensory stimuli.
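
The point about logical reasoners can be sketched in a few lines: a forward-chaining reasoner can only derive conclusions its pre-written rules already entail. The facts and rule names below are invented for illustration.

```python
# Minimal forward-chaining reasoner: it derives only what the
# programmer's hand-written rules entail, nothing more.
# Facts and rule names are invented for illustration.

def forward_chain(facts, rules):
    """facts: set of strings; rules: list of (premises, conclusion)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    (("has_fur", "says_meow"), "is_cat"),
    (("is_cat",), "dislikes_tail_pulling"),
]
result = forward_chain({"has_fur", "says_meow"}, rules)
```

However many sensory facts you pour in, a conclusion with no supporting rule can never appear, which is why roaming around adds little on its own.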

The kinds of experiences the bot could only learn from moving around are
beyond its sense-making; it can't gain any significant advantage from having
wheels and hands and exploring the environment, because it doesn't know how to
use them to enhance its knowledge and learn new kinds of things the way a
toddler can.

The only approach I know of that seems promising in that respect is IBM's
Watson. I'm not sure exactly how it works, but even if it's doing basic
pattern recognition, the huge scale of parallel processing and the enormous
data corpus fed into Watson _might_ just be what it takes to achieve sense-
making in an emergent way.

------
waster
I would also argue that for intelligence to be comparable to a human's,
creativity must be measured accurately. Of course, this is something that
classic IQ tests don't measure well, so maybe that's hoping for too much.

~~~
Houshalter
What do you mean by creativity?

~~~
waster
[https://en.wikipedia.org/wiki/Creativity](https://en.wikipedia.org/wiki/Creativity)

------
AlexFinks
To me, the mark of an artificial intelligence is _general analogizing
ability_, and that's not what we have here.

A system capable of general analogy could potentially write its own drivers
with some light scaffolding or guidance. A system capable of general analogy
would be able to form causal models of our world, and to be sensitive to the
differences between mere correlation and correlation with causal potential,
just as rats and ravens do.

This system has not yet even tackled the intelligence of rats and ravens.

------
DougWebb
Skynet isn't evil; it's just over tired and needs a nap.

------
ars
So basically it did like a 4-year-old on the knowledge portion, and failed
miserably on the intelligence portion?

I'm actually surprised that it only did like a 4-year-old on the knowledge
part; I would have assumed computers would do better.

We aren't anywhere near AI, but AK (artificial knowledge) exists.

------
capkutay
I'm not an expert in AI algorithms, but the ones I've been exposed to share
the characteristic of being brute force, with giant if-then-else trees of
possible decisions for the computer to make. It seems like any possible
outcome a human can't foresee will not be picked up by a computer either.

~~~
saosebastiao
Which ones are you talking about? I have only superficial exposure to them
(mostly in expert systems, machine learning, and optimization contexts), but
none of the ones I have seen come anywhere close to if-then-else trees.
Okay... maybe random forests, but those are only if-then-else trees in the
evaluation/prediction stage.

~~~
capkutay
I meant search trees such as the ones used in minimax. I think of them as
"if-then-else" in a more colloquial sense, which may be an incorrect
description.

------
ryan-allen
I read this as "Top Artificial Intelligence System is a Smart Ass", then
noticed "a 4 year old".

------
cupcake-unicorn
I can guarantee you it wouldn't even come close to matching the natural
language skills of a 4 year old. If so, that would be a major breakthrough for
computational linguistics and NLP, and yet I haven't heard anything about
that.

------
jimmaswell
Really, a 4-year-old? I thought even a cat wasn't possible yet (I remember a
story about someone failing to make one as good as a cat). That's impressive,
then.

~~~
Ellipsis753
I think that specific one was where someone simulated the same number of
neurons as a cat has. However, it failed because a virtual neuron isn't the
same as a cat's, and the training algorithms obviously weren't very similar.

Other AI projects vastly exceed humans in some fields, though (arguably chess
is an example of this).

This link seems to focus on decision making and the ability to appear normal
in general. In that respect I strongly agree with you. I've never seen an AI
chatbot that could hold a conversation as well as your average 4-year-old.

~~~
coldtea
> _Other AI projects vastly excel humans in some fields though (arguably chess
> is an example of this)._

That would be relevant if the chess-winning programs mimicked the way humans
think -- so that the same intelligence could be transferred to other fields.

As it is, it's no big feat, AI-wise, to win at chess by pruning decision
trees and the like -- and it's mostly non-transferable to regular reasoning.

It's like touting the fact that a computer can compute 432342/4234234 a
million times faster than me as relevant to it being intelligent.

~~~
Someone
_" That would be relevant if the chess winning programs mimicked the way
humans think -- so that the same intelligence could be transfered in other
fields."_

If that were true for human intelligence, the Kasparovs and Carlsen's of the
world could have lucrative side jobs solving e.g, protein folding problems.

I don't think it is as black and white as you claim it to be.

~~~
coldtea
> _If that were true for human intelligence, the Kasparovs and Carlsens of
> the world could have lucrative side jobs solving, e.g., protein folding
> problems._

Well, for one, great chess players generally have higher IQs. So great chess
and increased general intelligence ARE correlated.

Now, why should being good at chess also apply to protein folding? If
anything, following my logic it would be the opposite: protein folding is
more like number crunching than like the way human chess players think about
moves. If humans played chess like computers do, then yes, they would also be
good at protein folding. But humans play differently. For one, they don't
actually consider millions of possible future moves. They prune much more
intuitively and effectively than AI chess engines.

~~~
Someone
You argued:

 _" That would be relevant if the chess winning programs mimicked the way
humans think -- so that the same intelligence could be transfered in other
fields."_

That implies that, in humans, intelligence in chess can be transferred to
other fields. I pointed out that that certainly isn't universally true. I
think that correlation is fairly poor.

I also think the correlation between IQ and great chess playing is not that
great, but don't have evidence for it. Chess playing _ability_, maybe, but
good chess playing requires lots and lots of rote learning that the brightest
humans might find too dull to spend their time on.

Also, one can argue that your claim that humans _"prune much more intuitively
and effectively than AI chess engines"_ isn't true anymore. Humans prune
more, yes, but they also cannot beat today's best computers, so the
'effectively' part is up for discussion. Maybe they are pruning too much? Or
is it just that their evaluation function is inferior?

