A better headline might be "Can AI learn as flexibly as a four year old with today's known techniques?", to which the answer is of course "No".
While I do see, and to an extent agree with, their point, I think there is a lot more wrong with that headline than just the lack of a contextual end date.
Or, somewhat more plausibly, some theory or technique that constitutes a minor subfield right now but could be the next big thing in another 20-30 years, or whenever someone just makes it really fast.
"Ever" applies more to something with an indeterminate future, imo.
Brains just process information. Computers process information and we continue to make them better at it. Eventually, computers will process information at least as well as brains do for various definitions of "well".
There is absolutely no guarantee that consciousness and general intelligence are emergent properties of any sufficiently capable information processing system. It feels intuitive that they would be, but so far we only have biological examples and no artificial ones.
Even the most sophisticated artificial neural networks we can build today are nothing more than curve fitting with lots of parameters. Perhaps that's all it takes, and with enough parameters, curve fitting can give rise to general AI, but I'm seeing very bold predictions on HN every time this comes up and none of them are actually backed by anything tangible.
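To make the "curve fitting with lots of parameters" point concrete, here is a deliberately tiny sketch (my own toy, not a description of any real system): a one-parameter-pair "model" trained by gradient descent on squared error. Scale the parameter count up by many orders of magnitude and you have, structurally, the same procedure modern networks run.

```python
# Toy illustration of "curve fitting with parameters": adjust w and b to
# minimize squared error against data sampled from y = 2x + 1.
import random

data = [(x / 10, 2 * (x / 10) + 1) for x in range(-20, 21)]

w, b = random.uniform(-1, 1), random.uniform(-1, 1)
lr = 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(round(w, 2), round(b, 2))  # converges toward w ≈ 2, b ≈ 1
```

Whether stacking enough of this yields anything like general intelligence is exactly the open question.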
Can you provide any study that lays out a path with well researched and supported claims and assumptions from where we are now, to artificial general intelligence?
A runaway positive-feedback loop of continuously (self-)improving machines is a _possibility_ but by no means an inevitability. We don't even know what consciousness really is, what gives rise to it. To make a claim that it is inevitable that we'll be able to create general intelligence seems like an absurdly bold claim to make.
Again, I'm not saying it is impossible, but it seems like hubris to me to declare that it is certain when we only have vague and nebulous predictions to work from.
Your issues/questions indicate that you either aren't familiar with the talk or you disagree with the basic assumptions that he outlines. Questions like "Can you provide any study that lays out a path with well researched..." are nonsensical given that we obviously aren't there yet.
If you disagree with his stated assumptions (which I'm not just going to repeat here since the video is short), then we can just agree to disagree because I find them compelling.
There's nothing you can predict that isn't built upon some assumptions. "When I flip this light switch, the lights will go on like they have the last 10,000 times I did so" is a prediction built upon assumptions: that power is still running, that the light bulb hasn't burnt out, and that physics will continue working the way it has up to this point.
You need to consider the strength of the assumptions in order to evaluate the predictions. If you consider those assumptions to not be fairly solid... whatever. But just handwaving that because "we are talking about opinions and assumptions" somehow gives your disbelief credibility worth exploring is a non-starter.
But there are still many things a 4 year old can do that no AI can do.
Sometimes I feel like our AI research is leading us to consider ourselves increasingly unintelligent rather than our computers more intelligent. It's leading us to question what we really mean by the word "intelligence". Is it being good at certain skills? Is it being able to acquire arbitrary skills? Does it matter how much guidance we or the AI needs in order to learn?
An AI is not "smarter at chess" than a 4 year old any more than a calculator is "smarter at arithmetic" than a 4 year old or a car is "smarter at moving fast" than a 4 year old. A modern AI program is simply a computational statistics program crunching some numbers. It has no "smarts" or "understanding" of the game in any meaningful sense. The fact that it can beat a 4 year old at the game of chess is unsurprising and not particularly meaningful.
We have no idea what it takes to get general intelligence and/or understanding. Until we do, comparing computational statistics programs to 4 year old children is rather a meaningless exercise, just as Turing and Dijkstra told us decades ago:
"The question of whether Machines Can Think... is about as relevant as the question of whether Submarines Can Swim."
Her brain processing is changing on the most fundamental level. Her ability to focus on a task has improved massively. Far less random gibberish is coming out of her. She is now capable of thinking how others would feel in her shoes and now plans for the future. Her random output isn't a Markov chain with a better dataset. The input from our world is changing the wiring up there in a way that no ML model is changed by its data.
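For anyone who hasn't seen one, here is what the Markov chain being dismissed actually is (a toy of my own, chosen to make the contrast vivid): it can only ever emit word transitions that literally occur in its training data, which is precisely why "a Markov chain with a better dataset" is a low bar.

```python
# Toy bigram Markov chain: its output can only be word transitions that
# literally occur in the training data -- no rewiring, no new concepts.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# transitions[w] = every word observed to follow w in the corpus
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, n, rng=random.Random(0)):
    word, out = start, [start]
    for _ in range(n):
        if not transitions[word]:  # dead end: last word of the corpus
            break
        word = rng.choice(transitions[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output is guaranteed to appear somewhere in the corpus; a child's speech has no such guarantee.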
As someone who has learned a 2nd language as a child and a 3rd as an adult, I also know personally that it's far more than the dataset alone.
EDIT: updated comment after re-reading the chain.
I have three young children and I have observed the same as you with your niece. However a lot of it is that as children learn cause and effect they are better able to optimize for their desired outcomes.
"Smarter" is probably a poor choice of words; "more capable" at certain tasks, then, definitely. The sibling comment to this talks about identifying Legos. My four-year-old can do that because they now know Plato's form of the Lego: they observed me saying the word "Lego" while holding one, and came to understand what it is. With enough observations I would think that a general-purpose AI could identify the forms and then launch more specific AIs for the tasks.
Most devices we use are far better at doing their job than a human, which is why we use them. But even if they learn and improve what they're doing they are still only able to do that one single thing.
Also, I noticed above that the misconception still persists that autonomous driving today is safer than human driving, although it has been proven wrong. It will come at some point, but we're not there yet.
Ultimately our brains also do fuzzy statistics and arithmetic on the signals coming in, yet we create intelligent results from that. Is it suddenly not intelligent anymore once you know how the sausage is made?
They may not be self-learning, but not all AI is. And modern chess and go AI is self-learning and scarily good at it.
The reason you don't hear it anymore is that it's obvious that algorithms are doing this on the phone for commercial purposes constantly.
It's ridiculous how far the goalposts have moved.
It is AI researchers themselves who decided that chess and game-playing in general were interesting to research. Everybody used to consider skill at chess a sign of intelligence. There have been many, many problems that have long been complex problems for AI to tackle, and that have only recently struck people as being not really about intelligence. Creativity was barely on the table for many decades, and has only recently come within reach of AI research.
I wonder if maybe you're only looking at the last 20 years, and ignore the half century before that.
You could maybe say a machine translator can poorly translate a greater set of strings, including highly-technical strings. But a 4-year-old easily can translate a greater proportion of strings correctly or at least intelligibly.
> Nici măcar nu voi începe pe chineză / japoneză în engleză (și în franceză) sau invers.
and in Italian it goes like:
> Non inizierò nemmeno dal cinese / dal giapponese all'inglese (e dal francese) o viceversa.
Now, this is literally a perfect translation, as "I won't even start" would literally be translated into the Romanian "Nici măcar nu voi începe" and the Italian "Non inizierò nemmeno", but the meaning of the English phrase is totally different from the meaning of the Romanian and Italian phrases. In English the meaning of your last sentence goes something like this:
> Don't make me start a discussion on how badly Google Translate does the Chinese/Japanese to English translation
while in both Romanian and Italian the last sentence (as run through Google Translate) makes almost no sense, because it tells us, the audience, that the speaker literally won't start "on/from Chinese/Japanese to English", but we are never told what the speaker won't start; we have no idea what he's referring to. Of course, this being an English-speaking forum and us knowing English (on top of our native languages), we can understand that Google Translate rendered "I won't even start" literally into our native languages, and we can make the necessary adjustment in our heads; but to users with no knowledge of the English language whatsoever this would sound totally alien.
Edit: check replies below
I'd like to say it proves we're not general intelligences most of the time, but let's not generalize: it only proves that I am not a general intelligence most of the time. :)
In fact, many adults are laughably bad at translating idioms. There are even books translated by professional translators that mess them up.
This isn't a no-true-scotsman, either, because I posit that there are quite a lot of 4-year-olds who match my description.
 Though I have no idea how you'd measure 'understanding'.
I've come to the conclusion you can get a long ways by modeling humans as the minimally-intelligent species necessary to produce our current civilization. Whatever the space of "intelligent species" may look like, we almost by definition have to be at the simple end, rather than the intelligent one.
I think most people who might say that would mean it in a rather misanthropic or hopeless sense; I don't. I have more of an "it is what it is" attitude about it. I am, after all, one of the just-barely-intelligent-enough myself, and hardly in a position to make sweeping declarations about what a higher intelligence might say are the obvious things we should be doing next. I'd also observe that the "minimal intelligence to produce our current civilization" is still non-trivial; don't underestimate the complexity of our current world.
But still, yes: on the one hand it is absolutely true that humans are quite amazing, and we seem to be a rather long way from matching some of their feats; on the other hand, humans are also a long way from how we fancy ourselves in our collective self-image, let alone close to the limits of what is possible in our universe.
It's actually quite amazing how bad computers are at language considering the more than half century of research in the area. I tried to set a reminder in Siri: "Pull the briefs." Garbage. After several tries, I tried "download the briefs." Garbage. Siri couldn't even recognize the words I was saying, much less actually do the thing I asked.
> "Pull the briefs." Garbage. After several tries, I tried "download the briefs."
I have no idea what you mean by the first sentence, and would never guess that it was the same meaning as the second sentence if you hadn’t told me.
English is my native language, and my first assumption was the first sentence meant “remove the underwear from sale”.
Language is hard. So hard that I suspect current AI has a comparable understanding of natural language to a six year old (sure, if the six year old spoke 26 languages to the level of a normal six year old, but that’s an easy difference for a computer).
Comparably, I have moved to Berlin, and on paper my German vocabulary is about the size of a child’s vocabulary. I have difficulty parsing word boundaries and correctly recognising even those words which I do know when they’re spoken outside lesson environments.
But I’m sure you could’ve written down the literal phrase, which is all I asked Siri to do (Siri doesn’t do anything with a reminder other than parse it for dates and times). (Incidentally, my six year old would understand what I mean by brief—my wife and I are both lawyers and even a child is better at context than a computer.)
Given my childlike comprehension of German, a better comparison would be if I heard the German translation ("Zieh die Slips", says Google, making exactly the same incorrect assumption I would've made). If someone asked me to transcribe "Zieh die Slips", I would probably write "Sie die Slips" or "Sehe dies Lips".
Not only do each have their pros and cons when traversing water, but I'm not sure if a submarine swims anyway.
I think it's more than just semantics. And I think we're not close to such a submarine or such a computer because we've been doing other things with both mostly.
1) having good understanding or a high mental capacity; quick to comprehend, as persons or animals: an intelligent student.
2) displaying or characterized by quickness of understanding, sound thought, or good judgment: an intelligent reply.
3) having the faculty of reasoning and understanding; possessing intelligence: intelligent beings in outer space.
4) Computers. pertaining to the ability to do data processing locally; smart: an intelligent terminal can edit input before transmission to a host computer.
Depending on the definition computers already beat humans, not only 4 years old.
We can see no roadblock preventing advances from getting to that point. But the volume of data is admittedly beyond our current ability even to quantify accurately, much less record or process.
Assuming the full sensory data rate is 50 Mbit/s total (probably less), a 4-year-old has received roughly 100×10^6 s × 50 Mbit/s = 5×10^15 bits ≈ 625 TB of sensory information while in a waking state.
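The back-of-envelope figure above checks out under its own assumptions (50 Mbit/s total sensory bandwidth and ~100 million waking seconds are the comment's round numbers, not established facts):

```python
# Back-of-envelope check of the 625 TB figure. Both inputs are the
# comment's assumptions, not measured values.
waking_seconds = 100e6       # generous: 4 full years is only ~126e6 s total
rate_bits_per_s = 50e6       # assumed 50 Mbit/s total sensory bandwidth
total_bits = waking_seconds * rate_bits_per_s   # 5e15 bits
total_tb = total_bits / 8 / 1e12                # bits -> bytes -> terabytes
print(total_tb)  # 625.0
```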
End of the day human consciousness is incredibly powerful. The whole is greater than the sum of the parts.
And synergy is just as relevant for digital minds as organic ones.
Every human gets smarter as our tools like AI get better.
AI can look a very long way from human right up until it beats everyone. And even after it’s exceeded humans for decades, judging by someone I’ve seen in the last year or so who insisted they could beat any computer at chess.
For example, imagine that there is just one thing about our minds that we don’t understand how to replicate, which leaves our AI at gorilla-level. No real grasp of language, poor physics model, fails mirror test, etc.
Now build into that the software behind WolframAlpha, Siri, Google Translate, Tesla, and AlphaZero, and you have a gorilla who beats every human at chess, go, and shogi; drives about as well as the average American; knows more languages than most people can name; can solve advanced calculus, chemistry, physics, and economics problems; and has some limited capacity for speech.
It would still be an idiot by most people’s evaluation.
Now add that one little thing that separates us from gorillas.
(That’s assuming they still count as “tools” rather than as “people”, which is a whole different kettle of fish).
DARPA launched the MCS (Machine Common Sense) challenge a few months ago to attempt to address this: https://www.darpa.mil/program/machine-common-sense
I found this article to be slanted. This quote, for example, assigns too much creativity to a logic-based statement that I think is precisely the type of reasoning AI exhibits.
> Four-year-olds can immediately recognize cats and understand words, but they can also make creative and surprising new inferences that go far beyond their experience. My own grandson recently explained, for example, that if an adult wants to become a child again, he should try not eating any healthy vegetables, since healthy vegetables make a child grow into an adult. This kind of hypothesis, a plausible one that no grown-up would ever entertain, is characteristic of young children.
Also, I suggest updating the title to be less click bait-y. “The difference between bottom-up and top-down machine learning for beginners” or something like that.
I would even go as far as saying that the fact that we can't rigorously define what "smarter than a four-year-old" means is a limitation of the current intelligence or knowledge of humans. Part of the trajectory of machine intelligence is independent of our own knowledge, for example through hardware improvements. They could get past our level before we can understand it ourselves.
I do see the trajectories being different, but I believe they must also converge at some point.
Edit: Also, the documentary "Baby Geniuses" has extensive evidence that babies are really smart, they just can't communicate how smart they are.
A 7-year-old, on the other hand, is a tall order for AI, especially once you introduce the complex combination of skills with optical image recognition, haptic feedback and physical principles that need broader context and/or creativity...