Will A.I. Ever Be Smarter Than a Four-Year-Old? (smithsonianmag.com)
43 points by nabla9 20 days ago | 72 comments



"Ever" is not a useful word here. It's like saying "Will four year olds ever be grown in pods?"

A better headline might be "Can AI learn as flexibly as a four year old with today's known techniques?", to which the answer is of course "No".


Those sound like different questions. The original is asking if it is theoretically plausible. I don’t think that’s out of line.


The OP's point was that adding "ever" to the title means the question isn't just about our current understanding of what is theoretically plausible; it also doesn't exclude some currently unknown theory that might be derived at a distant point in the future.

While I do see - and to an extent agree with - their point, I think there is a lot more wrong with that headline than just the lack of a contextual end date.


> The OP's point was that adding "ever" to the title means the question isn't just about our current understanding of what is theoretically plausible; it also doesn't exclude some currently unknown theory that might be derived at a distant point in the future.

Or, somewhat more plausibly, some theory or technique that constitutes a minor subfield right now but could be the next big thing in another 20-30 years, or whenever someone just makes it really fast.


I still personally think it would be the former context (current tech) rather than the latter ("ever"), because you're discussing something that is already working theory - even if it is just a subfield. So the question isn't "Will we ever have..." but rather "When will it become...".

"Ever" applies more to something with an indeterminate future, imo.


To be fair, the article did little to nothing to address whether it's theoretically plausible, just that we are far away at the moment.


When the question is literally 'ever', the only thing that matters is the relative rate of improvement. Do computers get better at "learning" faster than young kids, e.g. comparing kids 20 years ago and now with computers 20 years ago and now? It seems to me that computers got a lot better while child development remained essentially the same. Unless that will change, given time—infinity should be enough—eventually computers will be "smarter".


Right, the title of the article is inane.

Brains just process information. Computers process information and we continue to make them better at it. Eventually, computers will process information at least as well as brains do for various definitions of "well".

https://www.ted.com/talks/sam_harris_can_we_build_ai_without...


That seems like a very reductionist way to look at it.

There is absolutely no guarantee that consciousness and general intelligence are emergent properties of any sufficiently capable information processing system. It feels intuitive that they would be, but so far we only have biological examples and no artificial ones.

Even the most sophisticated artificial neural networks we can build today are nothing more than curve fitting with lots of parameters. Perhaps that's all it takes, and with enough parameters, curve fitting can give rise to general AI, but I'm seeing very bold predictions on HN every time this comes up and none of them are actually backed by anything tangible.
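
To make the "curve fitting with lots of parameters" point concrete, here is a minimal sketch (an editorial illustration, not something from the article or the thread): a one-hidden-layer network trained by plain gradient descent to fit y = sin(x). Today's deep networks are structurally this, scaled up by many orders of magnitude in parameters and data.

  import numpy as np

  # Tiny one-hidden-layer tanh network fit to y = sin(x) by gradient descent.
  rng = np.random.default_rng(0)
  x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
  y = np.sin(x)

  W1, b1 = rng.normal(0.0, 0.5, (1, 16)), np.zeros(16)
  W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)

  lr = 0.05
  for step in range(5000):
      h = np.tanh(x @ W1 + b1)      # hidden activations
      pred = h @ W2 + b2            # network output
      err = pred - y                # gradient of MSE w.r.t. pred (up to a constant)
      # Backpropagate the error to every parameter.
      gW2 = h.T @ err / len(x)
      gb2 = err.mean(axis=0)
      dh = (err @ W2.T) * (1 - h ** 2)
      gW1 = x.T @ dh / len(x)
      gb1 = dh.mean(axis=0)
      W1 -= lr * gW1
      b1 -= lr * gb1
      W2 -= lr * gW2
      b2 -= lr * gb2

  print("final MSE:", float(((pred - y) ** 2).mean()))

Whether that kind of fitting, at sufficient scale, can give rise to general intelligence is exactly the open question the parent comment raises.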


Watch the video that I linked. Your questions/issues are answered or obviated by it.


Can you please point out where? Because I did watch this video in the past and I know about Sam Harris and his views. He addresses _none_ of my points and provides _zero_ proof for any of your claims. He makes interesting claims and proposes interesting outcomes given assumptions that are far from certain.

Can you provide any study that lays out a well-researched path, with supported claims and assumptions, from where we are now to artificial general intelligence?

A runaway positive-feedback loop of continuously (self-)improving machines is a _possibility_ but by no means an inevitability. We don't even know what consciousness really is, what gives rise to it. To make a claim that it is inevitable that we'll be able to create general intelligence seems like an absurdly bold claim to make.

Again, I'm not saying it is impossible, but it seems like hubris to me to declare that it is certain when we only have vague and nebulous predictions to work from.


"Can you please point out where?"

Your issues/questions indicate that you either aren't familiar with the talk or you disagree with the basic assumptions that he outlines. Questions like "Can you provide any study that lays out a well-researched path" are nonsensical given that we obviously aren't there yet.

If you disagree with his stated assumptions (which I'm not just going to repeat here since the video is short), then we can just agree to disagree because I find them compelling.


Ah but now we are talking about opinions and assumptions. You wrote that the linked talk will answer or obviate my questions, which it didn't. It made some fairly wild predictions with little actual evidence. The predictions are interesting for sure, but I like to see more robust support for such ambitious claims.


You're just being argumentative.

There's nothing you can predict that isn't built upon some assumptions. "When I flip this light switch, the lights will go on like they have the last 10,000 times I did so" is a prediction built upon the assumptions that power is still running, that the light bulb hasn't burnt out, and that physics will continue working the way it has up to this point.

You need to consider the strength of the assumptions in order to evaluate the predictions. If you consider those assumptions to not be fairly solid... whatever. But just handwaving that because "we are talking about opinions and assumptions" somehow gives your disbelief credibility worth exploring is a non-starter.


How can we know that computers process information?


Depends on what you consider "smarter" of course. At the moment, the best AI is much smarter at chess than any 4 year old. AI is also better at translating text to different languages. It's better at driving cars.

But there are still many things a 4 year old can do that no AI can do.

Sometimes I feel like our AI research is leading us to consider ourselves increasingly unintelligent rather than our computers more intelligent. It's leading us to question what we really mean by the word "intelligence". Is it being good at certain skills? Is it being able to acquire arbitrary skills? Does it matter how much guidance we or the AI needs in order to learn?


the best AI is much smarter at chess than any 4 year old

An AI is not "smarter at chess" than a 4 year old any more than a calculator is "smarter at arithmetic" than a 4 year old or a car is "smarter at moving fast" than a 4 year old. A modern AI program is simply a computational statistics program crunching some numbers. It has no "smarts" or "understanding" of the game in any meaningful sense. The fact that it can beat a 4 year old at the game of chess is unsurprising and not particularly meaningful.

We have no idea what it takes to get general intelligence and/or understanding. Until we do, comparing computational statistics programs to 4 year old children is rather a meaningless exercise, just as Turing and Dijkstra told us decades ago:

"The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim."


I feel like with that line of reasoning you could also say that a 34-year-old isn't smarter than a 4-year-old. The 34-year-old just has a bigger dataset.


The changes I've seen in the past year go far beyond what could be explained by a "bigger dataset" alone.

Her brain processing is changing on the most fundamental level. Her ability to focus on a task has improved massively. Far less random gibberish is coming out of her. She is now capable of thinking how others would feel in her shoes and now plans for the future. Her random output isn't a Markov chain with a better dataset. The input from our world is changing the wiring up there in a way that no ML model is changed by its data.
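
For reference, this is the kind of "Markov chain with a better dataset" text generator the parent is contrasting with a child's development. The toy sketch below is an editorial illustration, not anything from the thread: it strings words together based only on what tends to follow what, which is why its output is locally plausible but globally meaningless.

  import random

  # Word-level Markov chain: record which word follows which in a corpus,
  # then generate text by repeatedly sampling a recorded successor.
  corpus = "the cat sat on the mat and the cat saw the dog".split()
  chain = {}
  for a, b in zip(corpus, corpus[1:]):
      chain.setdefault(a, []).append(b)

  word = random.choice(corpus)
  output = [word]
  for _ in range(8):
      word = random.choice(chain.get(word, corpus))  # fall back if no successor
      output.append(word)
  print(" ".join(output))  # e.g. "the cat sat on the mat and the dog"

A bigger corpus makes the output more fluent, but the mechanism never changes; the parent's point is that a child's development does.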

As someone who has learned a 2nd language as a child and a 3rd as an adult, I also know personally that it's far more than the dataset alone.

EDIT: updated comment after re-reading the chain.


I wasn't arguing that was the case but felt like the parent's logic would extend to that conclusion.

I have three young children and I have observed the same as you with your niece. However, a lot of it is that as children learn cause and effect, they are better able to optimize for their desired outcomes.

"smarter" is probably a poor choice of words. "more capable" at certain tasks then definitely. The sibling comment to this talks about identifying Legos. My four-year-old can do that because they now know Plato's form of the Lego because they observed me saying the word Lego while holding them to understand what that is. With enough observations I would think that a general purpose AI could identify the forms and then launch more specific AIs for the tasks.


A 4 year old has "general purpose intelligence". It's an "algorithm" that can learn (from) anything. Can the translating AI solve a simple puzzle? No, because it is only trained for a specific and narrow activity. The same goes for most other specialized AIs. AlphaZero can beat anyone at chess, but ask it to identify LEGO toys and it will fail.

Most devices we use are far better at doing their job than a human, which is why we use them. But even if they learn and improve what they're doing they are still only able to do that one single thing.

Also, I noticed above that the misconception still persists that autonomous driving today is safer than human driving, although it has been proven wrong [0]. It will come at some point but we're not there yet.

[0] https://arstechnica.com/cars/2019/02/in-2017-the-feds-said-t...


The distinction is apples and oranges. A 4 year old is capable of creating explanatory knowledge about the world. A 34 year old is better at doing that. All current computational statistics programs are utterly incapable of anything even remotely approaching this capability.


See? That line of reasoning is exactly what I mean when I say we're moving the goalposts on what it means to be intelligent.

Ultimately our brains also do fuzzy statistics and arithmetic on the incoming signals, yet we create intelligent results from that. Is it suddenly not intelligent anymore once you know how the sausage is made?


Not at all. The goalposts are the same as they always were. If we want intelligent machines (the kind capable of creativity, that is, the ability to create new explanatory knowledge about reality), then we will need to see evidence of this creativity. So far no AI has even remotely approached this, because, among other things, no AI is capable of explaining anything at all.


No, the goalposts have definitely moved. Playing chess used to be a sign of intelligence, and for that reason, game playing has always been a big focus for AI research. Search algorithms in general have always been part of AI research.

They may not be self-learning, but not all AI is. And modern chess and go AI is self-learning and scarily good at it.


You don't even hear about the Turing test anymore. The original Turing test was an AI algorithm fooling one human, for about 5 minutes, into believing that it itself was human.

The reason you don't hear about it anymore is that algorithms are obviously doing this on the phone for commercial purposes constantly.

It's ridiculous how far the goalposts have moved.


The goalposts are exactly where I said they are. They have always been there for people that actually understand the nature of the problem. The fact that some people were spectacularly and obviously wrong about the nature of chess programs is irrelevant.


Now you're just trying to sound pompous in an attempt to seem authoritative, but you're really not saying anything meaningful here. Where are those goalposts according to you? Who are those "people that actually understand the nature of the problem"? Which people are "spectacularly and obviously wrong about the nature of chess programs"?

It is AI researchers themselves who decided that chess and game-playing in general were interesting to research. Everybody used to consider skill at chess a sign of intelligence. There have been many, many problems that have long been complex problems for AI to tackle, and that have only recently struck people as being not really about intelligence. Creativity was barely on the table for many decades, and has only recently come within reach of AI research.

I wonder if maybe you're only looking at the last 20 years, and ignore the half century before that.


I agree with your points on AI being better at certain tasks, except of course regarding translation. There are definitely bilingual 4-year-olds who translate better than the state of the art machine translator.

You could maybe say a machine translator can poorly translate a greater set of strings, including highly technical strings. But a 4-year-old can easily translate a greater proportion of strings correctly, or at least intelligibly.


Really? Because I continue to be more and more amazed by Google Translate every day. It gets subtleties and strange things right that I never thought a computer translation system could get right. I'm an ESL speaker, and I think the translation I can do between English and my mother language is certainly going to be inferior to Google Translate any day of the week, ignoring completely the fact that I take a _LONG_ time to do translation while Google Translate does it in a second.


If you need to translate what you are going to say, i.e. your thoughts are not in your target language, then you are not exactly a speaker. And I can assure you, Google Translate is fairly awful. Sometimes my wife shows me a translation from English to French, and it takes me a while to understand the French meaning. I won't even start on Chinese/Japanese to English (and to French) or the reverse.


It has been getting better, but Google Translate still lacks context (obviously), which makes it fail in some subtle but important ways. For example, I've just run your comment through Google Translate from English to Romanian (my native language) and Italian (which I understand pretty well), and it does a pretty good job until your last sentence, which it translates into Romanian as:

> Nici măcar nu voi începe pe chineză / japoneză în engleză (și în franceză) sau invers.

and in Italian it goes like:

> Non inizierò nemmeno dal cinese / dal giapponese all'inglese (e dal francese) o viceversa.

Now, this is a perfectly literal translation, as "I won't even start" would indeed be translated word for word into the Romanian "Nici măcar nu voi începe" and the Italian "Non inizierò nemmeno", but the meaning of the English phrase is totally different from the meaning of the Romanian and Italian phrases. In English, the meaning of your last sentence goes something like this:

> Don't make me start a discussion on how badly Google Translate does the Chinese/Japanese to English translation

while in both Romanian and Italian the meaning of the last sentence (as run through Google Translate) makes almost no sense, because it tells us, the audience, that the speaker literally won't start "on/from Chinese/Japanese to English", but we are not told what the speaker won't start, so we have no idea what he's referring to. Of course, this being an English-speaking forum and us knowing English (on top of our native languages), we can understand that Google Translate used a literal translation of "I won't even start" and make the necessary adjustment in our heads, but to users with no knowledge of English whatsoever this would sound totally alien.


Ironically, I think the GP meant exactly what Google Translate produced, and that your interpretation is wrong. He says that it takes him a long time to translate French and he won't even start if the original text is Chinese or Japanese. :)

Edit: check replies below


I am a native French speaker, and I need concentration to understand the intent of any xyz-to-French Google translation. A good test is to translate a text back and forth. I often use German to English or Japanese to English (Amazon); I can guess the overall meaning, but 60% of it is my own interpretation.


Completely my fault: I was skimming too fast and mixed up pen2l's comment with your reply, then proceeded to explain to paganel what the mixed-up version meant :)

I'd like to say it proves we're not general intelligences most of the time, but let's not generalize: it only proves that I am not a general intelligence most of the time. :)


I'd guess we'll have to wait for him to explain :) Anyway, translations and what gets lost in translation can be pretty fun.


See my reply to your sibling comment :/


We're comparing to a 4 year old, though. 4 year olds don't know many idioms yet, and don't know quite a lot of the words you used.

In fact, many adults are laughably bad at translating idioms. There are even books translated by professional translators that mess them up.


It is also absolutely atrocious at English -> German. The Deepl.com translator is considerably better, but still very far from perfect.


Learning a language as an adult is a different story.

This isn't a no-true-Scotsman, either, because I posit that there are quite a lot of 4-year-olds who match my description.


I don't think any 4 year old can do better at French<>English translation than Google Translate does. It's no contest. Even at 9, kids tend to make huge mistakes in their maternal tongue, fudging up words or spelling creatively.


I'm willing to believe that 4 year olds can be better at understanding texts in two different languages[0], but I've yet to see a 4 year old translate a piece of text from one language to another.

[0] Though I have no idea how you'd measure 'understanding'.


"Sometimes I feel like our AI research is leading us to consider ourselves increasingly unintelligent rather than our computers more intelligent."

I've come to the conclusion you can get a long ways by modeling humans as the minimally-intelligent species necessary to produce our current civilization. Whatever the space of "intelligent species" may look like, we almost by definition have to be at the simple end, rather than the intelligent one.

I think most people who might say that would mean it in a rather misanthropic or hopeless sense; I don't. I have more of an "it is what it is" attitude about it. I am, after all, one of the just-barely-intelligent-enough myself and hardly in a position to make sweeping declarations of what a higher intelligence might say is the obvious things we should be doing next. I'd also observe that the "minimal intelligence to produce our current civilization" is still non-trivial; don't underestimate the complexity of our current world.

But still, yes, on the one hand it is absolutely true that humans are quite amazing, and we seem to be a rather long way from matching some of their feats. On the other hand, we are also a long way from even how we fancy ourselves in our collective self-image, let alone from the limits of what is possible in our universe.


Obviously a trained translation system is better than a four year old who doesn't speak a different language, but a four year old is probably better at translating a language she does speak.

It's actually quite amazing how bad computers are at language, considering more than half a century of research in the area. I tried to set a reminder in Siri: "Pull the briefs." Garbage. After several tries, I tried "download the briefs." Garbage. Siri couldn't even recognize the words I was saying, much less actually do the thing I asked.


> a four year old is probably better at translating language she does speak.

> "Pull the briefs." Garbage. After several tries, I tried "download the briefs.”

I have no idea what you mean by the first sentence, and would never guess that it was the same meaning as the second sentence if you hadn’t told me.

English is my native language, and my first assumption was the first sentence meant “remove the underwear from sale”.

Language is hard. So hard that I suspect current AI has a comparable understanding of natural language to a six year old (sure, if the six year old spoke 26 languages to the level of a normal six year old, but that’s an easy difference for a computer).

For comparison, I have moved to Berlin, and on paper my German vocabulary is about the size of a child's vocabulary. I have difficulty parsing word boundaries and correctly recognising even those words I do know when they're spoken outside lesson environments.


> I have no idea what you mean by the first sentence, and would never guess that it was the same meaning as the second sentence if you hadn’t told me.

But I’m sure you could’ve written down the literal phrase, which is all I asked Siri to do (Siri doesn’t do anything with a reminder other than parse it for dates and times). (Incidentally, my six year old would understand what I mean by brief—my wife and I are both lawyers and even a child is better at context than a computer.)


In English sure, but I don’t have a six-year-old’s grasp of English.

Given my childlike comprehension of German, a better comparison would be if I heard the German translation ("Zieh die Slips", says Google, making exactly the same incorrect assumption I would've made). If someone asked me to transcribe "Zieh die Slips", I would probably write "Sie die Slips" or "Sehe dies Lips".


The distinguishing factor is considered to be general intelligence, i.e. the ability to learn to solve new tasks in many different problem domains and to adapt while applying that learning.


Will a submarine ever be a better swimmer than Michael Phelps?

Not only do each have their pros and cons when traversing water, but I'm not sure if a submarine swims anyway.

I think it's more than just semantics. And I think we're not close to such a submarine or such a computer, mostly because we've been doing other things with both.


I'm okay if it takes 100 or 200 years. That's like going from the year 1800 to the year 2000. It's not a long time at all. I think it's completely doable and a reasonable upper bound. Everyone just needs to relax a bit and let progress run its course.


The longer the better, as it gives society time to adapt/prepare for such a potentially powerful technology.


I think we first have to scope what definition of intelligence we want to compare:

1) having good understanding or a high mental capacity; quick to comprehend, as persons or animals: an intelligent student.

2) displaying or characterized by quickness of understanding, sound thought, or good judgment: an intelligent reply.

3) having the faculty of reasoning and understanding; possessing intelligence: intelligent beings in outer space.

4) Computers. pertaining to the ability to do data processing locally; smart: An intelligent terminal can edit input before transmission to a host computer.

Depending on the definition, computers already beat humans, not only 4-year-olds.


The question is really whether an AI will ever be able to process all the sensory data that a 4-year-old has (including time in the womb), plus the input from epigenetic data resulting from the parent's environment. Plus data from the particular gut flora inherited from the mother and influenced by the environment.

We can see no roadblock preventing advances from getting to that point. But the volume of data is admittedly beyond our current ability even to quantify accurately, much less record or process.


The human retina communicates with the brain at about 10 Mbps. The 4-year-old's brain has experienced 9 months in the womb and 4 years outside it. That's less than 150 million seconds, more than a third of it spent sleeping.

Assuming the full sensory data rate is 50 Mbps total (probably less), a 4-year-old has received about 100×10^6 s × 50 Mbps ≈ 625 TB of sensory information while in a waking state.
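
The arithmetic checks out; here is a quick sketch using the comment's own round numbers (the 50 Mbps figure is of course the speculative part):

  # Back-of-the-envelope check of the figures above. Assumptions as stated:
  # ~4.75 years total, roughly a third of it asleep, 50 Mbit/s sensory input.
  seconds_total = 4.75 * 365 * 24 * 3600   # womb + 4 years ~= 1.5e8 s
  seconds_awake = seconds_total * (2 / 3)  # ~= 1.0e8 s
  bits = seconds_awake * 50e6              # at 50 Mbit/s while awake
  terabytes = bits / 8 / 1e12
  print(f"{seconds_total:.2e} s total, {terabytes:.0f} TB while awake")
  # -> 1.50e+08 s total, 624 TB while awake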


I doubt it.

At the end of the day, human consciousness is incredibly powerful. The whole is greater than the sum of the parts.


Transistors are faster than synapses to the same degree that wolves are faster than hills.

And synergy is just as relevant for digital minds as organic ones.


Think about how much a gorilla has in common with a human. We’re close, but only one species is in a zoo.

Every human gets smarter as our tools like AI get better.


That minor differences put gorillas in zoos and humans watching them is part of the problem.

AI can look a very long way from human-level right up until it beats everyone, and can keep looking that way even after it has exceeded humans for decades, judging by someone I've seen in the last year or so who insisted they could beat any computer at chess.

For example, imagine that there is just one thing about our minds that we don’t understand how to replicate, which leaves our AI at gorilla-level. No real grasp of language, poor physics model, fails mirror test, etc.

Now build into that the software behind WolframAlpha, Siri, Google Translate, Tesla, and AlphaZero, and you have a gorilla who beats every human at chess, go, and shogi; drives about as well as the average American; knows more languages than most people can name; can solve advanced calculus, chemistry, physics, and economics problems; and has some limited capacity for speech.

It would still be an idiot by most people’s evaluation.

Now add that one little thing that separates us from gorillas.

(That’s assuming they still count as “tools” rather than as “people”, which is a whole different kettle of fish).


The answer is yes. Ever is a very long time.


That's possible, but it's also why the question is useless - if we draw the line out far enough that it's inevitable, it's also entirely possible that for whatever reason human advancement stops before then.


4-month-old performance is the barrier. If we ever get past that, everything else is just a matter of time.

DARPA launched the MCS challenge a few months ago to attempt to address this. https://www.darpa.mil/program/machine-common-sense


Yes.

I found this article to be slanted. This quote, for example, assigns too much creativity to a logic-based statement that I think is precisely the type of reasoning AI exhibits.

> Four-year-olds can immediately recognize cats and understand words, but they can also make creative and surprising new inferences that go far beyond their experience. My own grandson recently explained, for example, that if an adult wants to become a child again, he should try not eating any healthy vegetables, since healthy vegetables make a child grow into an adult. This kind of hypothesis, a plausible one that no grown-up would ever entertain, is characteristic of young children.

Also, I suggest updating the title to be less clickbait-y. "The difference between bottom-up and top-down machine learning for beginners" or something like that.


Since "ever" is a very long time: Probably yes. Far more interesting: Will it be smarter in some areas that can be used for something? That already happened. AI is very smart for specific use cases and it is useful.


The current AI paradigm is good and improving at classification. It's terrible at synthesis ("is it too risky to leave without an umbrella?") and question formulation ("what do I need to know before deciding if I should leave without an umbrella?"). My opinion is that the current paradigm will never get better than a really smart dog. But someone will come up with a new paradigm.


Once we can rigorously define what "smarter than a four-year-old" means, then yes. But until we even know what we are talking about, no.


I'm not sure that is true. For example I can tell you are "smarter than a four-year-old", even though we can't rigorously define what that means.

I would even go as far as saying that the fact that we can't rigorously define what "smarter than a four-year-old" means is a limitation of the current intelligence or knowledge of Humans. Part of the trajectory of machine intelligence is independent of our own knowledge, for example through hardware improvements. They could get past our level before we can understand it ourselves.


We understand what "smart" means in relation to what one can do in the world, but that says nothing of what it actually is. It's like the analogy "that guy is like a dog chasing a bone": it only means something if you understand it; it has no substance on its own. I feel the same way about how people use terms like "smart" and "intelligent": they only mean something if we already know what they mean, but we only know what they mean in relation to the world, not their character independent of it.

I do see the trajectories being different, but I believe they must also converge at some point.

Edit: Also, the documentary "Baby Geniuses" has extensive evidence that babies are really smart, they just can't communicate how smart they are.


Don't underestimate how intelligent 4 year olds are.


Don't overestimate how logical and structured 4-year-olds are. (I have one right now, and I would trust an AI I trained for a day over the child I trained for a day -- for just about any task.)

A 7-year-old, on the other hand, is a tall order for AI, especially once you introduce the complex combination of skills with optical image recognition, haptic feedback and physical principles that need broader context and/or creativity...


Betteridge's law of headlines.


Law of Comments to Headlines to Which Betteridge's Law Applies: a useless comment will reference the all-too-familiar Betteridge's Law.


[flagged]


= sha1("The title of the article will distract HN readers from the content.")



