
Will A.I. Ever Be Smarter Than a Four-Year-Old? - nabla9
https://www.smithsonianmag.com/innovation/will-ai-ever-be-smarter-than-four-year-old-180971259/
======
flavor8
"Ever" is not a useful word here. It's like saying "Will four year olds ever
be grown in pods?"

A better headline might be "Can AI learn as flexibly as a four year old with
today's known techniques?", to which the answer is of course "No".

~~~
b_tterc_p
Those sound like different questions. The original is asking if it is
theoretically plausible. I don’t think that’s out of line.

~~~
laumars
The OP's point was that adding "ever" in the title means the question isn't
just relevant to our current understanding of what is theoretically plausible
but it also doesn't exclude some currently unknown theory that might be
derived at a distant point in the future.

While I do see - and to an extent agree with - their point, I think there is a
lot more wrong with that headline than just the lack of a contextual end date.

~~~
eli_gottlieb
> The OP's point was that adding "ever" in the title means the question isn't
> just relevant to our current understanding of what is theoretically
> plausible but it also doesn't exclude some currently unknown theory that
> might be derived at a distant point in the future.

Or, somewhat more plausibly, some theory or technique that constitutes a minor
subfield right now but could be the next big thing in another 20-30 years, or
whenever someone just makes it really fast.

~~~
laumars
I still personally think it would be the former context (current tech) rather
than the latter ("ever"), because you're discussing something that is already a
working theory - even if it is just a subfield. So the question isn't " _Will
we ever have..._ " but rather " _When will it become..._ ".

Ever applies more for something with an indeterminate future imo.

------
mcv
Depends on what you consider "smarter" of course. At the moment, the best AI
is much smarter at chess than any 4 year old. AI is also better at translating
text to different languages. It's better at driving cars.

But there are still many things a 4 year old can do that no AI can do.

Sometimes I feel like our AI research is leading us to consider ourselves
increasingly unintelligent rather than our computers more intelligent. It's
leading us to question what we really mean by the word "intelligence". Is it
being good at certain skills? Is it being able to acquire arbitrary skills?
Does it matter how much guidance we or the AI needs in order to learn?

~~~
jkholm
I agree with your points on AI being better at certain tasks, except of course
regarding translation. There are definitely bilingual 4-year-olds who
translate better than the state of the art machine translator.

You could maybe say a machine translator can poorly translate a greater set of
strings, including highly technical strings. But a 4-year-old can easily
translate a greater proportion of strings _correctly_ , or at least
_intelligibly_.

~~~
pen2l
Really? Because I continue to be more and more amazed by Google Translate
every day. It gets subtleties and strange things right that I never thought a
computer translation system could get right. I'm an ESL speaker, and I think
the translation I can do between English and my mother language is certainly
going to be inferior to Google Translate's any day of the week, completely
ignoring the fact that I take a _LONG_ time to do a translation while Google
Translate does it in a second.

~~~
Foobar8568
If you need to translate what you are going to say, i.e. your thoughts are not
in your target language, then you are not exactly a speaker. And I can assure
you, Google Translate is fairly awful. Sometimes my wife shows me a
translation from English to French, and it takes me a while to understand the
French meaning. I won't even start on Chinese/Japanese to English (and to
French), or the reverse.

~~~
paganel
It has been getting better, but Google Translate still lacks context
(obviously), which makes it fail in some subtle but important ways. For
example, I've just run your comment through Google Translate from English to
Romanian (my native language) and Italian (which I understand pretty well),
and it does a pretty good job until your last sentence, which in Romanian it
translates as:

> Nici măcar nu voi începe pe chineză / japoneză în engleză (și în franceză)
> sau invers.

and in Italian it goes like:

> Non inizierò nemmeno dal cinese / dal giapponese all'inglese (e dal
> francese) o viceversa.

Now, this is a perfect literal translation, as "I won't even start" would
literally be translated into the Romanian "Nici măcar nu voi începe" and the
Italian "Non inizierò nemmeno", but the meaning of the English phrase is
totally different from the meaning of the Romanian and Italian phrases. In
English, the meaning of your last sentence is something like this:

> Don't make me start a discussion on how badly Google Translate does the
> Chinese/Japanese to English translation

while in both Romanian and Italian the last sentence (as run through Google
Translate) makes almost no sense, because it tells us, the audience, that the
speaker literally won't start "on/from Chinese/Japanese to English", but we
are not told what the speaker won't start; we have no idea what he's referring
to. Of course, this being an English-speaking forum and us knowing English (on
top of our native languages), we can tell that Google Translate used a literal
translation of "I won't even start" in our native languages and make the
necessary adjustment in our heads, but to users with no knowledge of English
whatsoever this would sound totally alien.

~~~
Udik
Ironically, I think the GP meant exactly what google translate translated, and
that your interpretation is wrong. He says that it takes him a long time to
translate French and he won't even start if the original text is Chinese or
Japanese. :)

Edit: check replies below

~~~
Foobar8568
I am a native French speaker, and I need concentration to understand the
intent of any X-to-French Google translation. A good test is to translate a
text back and forth. I often use German-to-English or Japanese-to-English
(Amazon); I can guess the overall meaning, but 60% of it is my own
interpretation.

~~~
Udik
Completely my fault - I was skimming too fast and mixed up pen2l's comment
with your reply, then proceeded to explain to paganel what the mix meant :)

I'd like to say it proves we're not general intelligences most of the time,
but let's not generalize: it only proves that _I_ am not a general
intelligence most of the time. :)

------
laichzeit0
I'm okay if it takes 100 or 200 years. That's like going from the year 1800 to
the year 2000. It's not a long time at all. I think it's completely doable and
a reasonable upper bound. Everyone just needs to relax a bit and let progress
run its course.

~~~
goatlover
The longer the better, as it gives society time to adapt/prepare for such a
potentially powerful technology.

------
lelima
I think we first have to scope what definition of intelligence we want to
compare:

1) having good understanding or a high mental capacity; quick to comprehend,
as persons or animals: an intelligent student.

2) displaying or characterized by quickness of understanding, sound thought,
or good judgment: an intelligent reply.

3) having the faculty of reasoning and understanding; possessing intelligence:
intelligent beings in outer space.

4) Computers. pertaining to the ability to do data processing locally; smart:
An intelligent terminal can edit input before transmission to a host computer.

Depending on the definition, computers already beat humans, not only
4-year-olds.

------
ballenf
The question is really whether an AI will ever be able to process all the
sensory data that a 4-year-old has (including time in the womb), plus the
input from epigenetic data resulting from the parent's environment. Plus data
from the particular gut flora inherited from the mother and influenced by the
environment.

We can see no roadblock preventing advances from getting to that point. But
the volume of data is admittedly beyond our ability even to accurately
quantify, much less record or process.

~~~
MAXPOOL
The human retina communicates with the brain at about 10 Mbps. A 4-year-old's
brain has experienced 9 months in the womb and 4 years outside it. That's less
than 150 million seconds, more than a third of it spent sleeping.

Assuming the full sensory data rate is 50 Mbps total (probably less), a
4-year-old has received 100×10^6 s × 50 Mbps ≈ 625 TB of sensory information
while in a waking state.

------
Spooky23
I doubt it.

End of the day human consciousness is incredibly powerful. The whole is
greater than the sum of the parts.

~~~
ben_w
Transistors are faster than synapses to the same degree that wolves are faster
than hills.

And synergy is just as relevant for digital minds as organic ones.

~~~
Spooky23
Think about how much a gorilla has in common with a human. We’re close, but
only one species is in a zoo.

Every human gets smarter as our tools like AI get better.

~~~
ben_w
That minor differences put gorillas in zoos and humans watching them is part
of the problem.

AI can look a very long way from human right up until it beats everyone. And
even after it has exceeded humans for decades, judging by someone I saw in the
last year or so who insisted they could beat any computer at chess.

For example, imagine that there is just one thing about our minds that we
don’t understand how to replicate, which leaves our AI at gorilla-level. No
real grasp of language, poor physics model, fails mirror test, etc.

Now build into that the software behind WolframAlpha, Siri, Google Translate,
Tesla, and AlphaZero, and you have a gorilla who beats _every_ human at chess,
go, and shogi; drives about as well as the average American; knows more
languages than most people can name; can solve advanced calculus, chemistry,
physics, and economics problems; and has some limited capacity for speech.

It would still be an idiot by most people’s evaluation.

Now add that one little thing that separates us from gorillas.

(That’s assuming they still count as “tools” rather than as “people”, which is
a whole different kettle of fish).

------
objektif
The answer is yes. Ever is a very long time.

~~~
SketchySeaBeast
That's possible, but it's also why the question is useless - if we want to
draw out a line long enough where it's inevitable, it's also entirely possible
that for whatever reason human advancement stops before then.

------
freediver
4-month-old performance is the barrier. If we ever get past that, everything
else is just a matter of time.

Darpa launched the MCS challenge a few months ago to attempt to address this.
[https://www.darpa.mil/program/machine-common-sense](https://www.darpa.mil/program/machine-common-sense)

------
zackkatz
Yes.

I found this article to be slanted. This quote, for example, assigns too much
creativity to a logic-based statement that I think is precisely the type of
reasoning AI exhibits.

> Four-year-olds can immediately recognize cats and understand words, but they
> can also make creative and surprising new inferences that go far beyond
> their experience. My own grandson recently explained, for example, that if
> an adult wants to become a child again, he should try not eating any healthy
> vegetables, since healthy vegetables make a child grow into an adult. This
> kind of hypothesis, a plausible one that no grown-up would ever entertain,
> is characteristic of young children.

Also, I suggest updating the title to be less click bait-y. “The difference
between bottom-up and top-down machine learning for beginners” or something
like that.

------
sgift
Since "ever" is a very long time: Probably yes. Far more interesting: Will it
be smarter in some areas that can be used for something? That already
happened. AI is very smart for specific use cases and it is useful.

------
ykevinator
The current ai paradigm is good and improving at classification. It's terrible
at synthesis (is it too risky to leave without an umbrella) and question
formulation (what do I need to know before deciding if I should leave without
an umbrella). My opinion is that the current paradigm will never get better
than a really smart dog. But, someone will come up with a new paradigm.

------
BucketSort
Once we can rigorously define what "smarter than a four-year-old" means, then
yes. But until we even know what we are talking about, no.

~~~
jobigoud
I'm not sure that is true. For example I can tell you are "smarter than a
four-year-old", even though we can't rigorously define what that means.

I would even go as far as saying that the fact that we can't rigorously define
what "smarter than a four-year-old" means is a limitation of the current
intelligence or knowledge of Humans. Part of the trajectory of machine
intelligence is independent of our own knowledge, for example through hardware
improvements. They could get past our level before we can understand it
ourselves.

~~~
BucketSort
We understand what "smart" means in relation to what one can do in the world,
but that says nothing of what it actually is. It's like if I make an analogy:
"That guy is like a dog chasing a bone." It only means something if you
understand it; it has no substance on its own. I feel the same way about the
way people use terms like "smart" and "intelligent": they only mean something
if we already know what they mean, but we only know what they mean in relation
to the world, not their character independent of it.

I do see the trajectories being different, but I believe they must also
converge at some point.

Edit: Also, the documentary "Baby Geniuses" has extensive evidence that babies
are really smart, they just can't communicate how smart they are.

------
JustSomeNobody
Don't underestimate how intelligent 4 year olds are.

~~~
MagnumOpus
Don't overestimate how logical and structured 4-year-olds are. (I have one
right now, and I would trust an AI I trained for a day over the child I
trained for a day -- for just about any task.)

A 7-year-old, on the other hand, is a tall order for AI, especially once you
introduce the complex combination of skills with optical image recognition,
haptic feedback and physical principles that need broader context and/or
creativity...

------
mfritsche
Betteridge's law of headlines.

~~~
mhb
Law of Comments on Headlines to Which Betteridge's Law Applies: a useless
comment will reference the all-too-familiar Betteridge's Law.

