> IMO, in the next 5 years or so, and somewhere between 0.1 and 1 ExaFlop, we'll probably hit human-level AI.
That is a mighty big prediction to just throw around so easily. Do you have any material to back that claim?
In my humble opinion, we are nowhere near general AI, let alone something on the human level. For all intents and purposes, even the most sophisticated NN models used and built these days are "dumb" when compared to biological intelligence.
It's not even clear whether general intelligence is an emergent property of all complex neural networks, or if there is some other secret sauce that is required. Having such a fundamental question unanswered, and yet claiming that in just 5 years we will be capable of matching the highest-level intelligence we are currently aware of, seems very bold to me.
Had you said something at least a bit more nebulous, like 50 or 100 years, one could take a bit of a leap of faith and posit that a massive breakthrough will happen that will give us insight into how to build proper AI, as opposed to what is essentially curve fitting with ever more parameters.
I'm not saying that curve fitting with ever more parameters cannot eventually reach human level intelligence, but so far there is zero indication that it can.
> That is a mighty big prediction to just throw around so easily. Do you have any material to back that claim?
It's part opinion and part extrapolation from recent progress. Call it a highly, highly educated guess.
I divide the issues related to human-level AI into three categories: hardware, algorithm, and biology.
1) Hardware: We have the hardware. As the other comments show, the biggest supercomputers are already cranking out 1 ExaFlops or more. So I consider this a solved problem.
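The 0.1–1 ExaFlop range above can be reproduced with a back-of-envelope estimate of the brain's "compute". All the figures below are rough, commonly cited assumptions (neuron count, synapses per neuron, firing rate), not measurements, and the whole exercise assumes one synaptic event maps to roughly one FLOP:

```python
# Back-of-envelope estimate of brain compute; every constant here is a
# rough assumption, not a measured value.
NEURONS = 8.6e10           # ~86 billion neurons (commonly cited estimate)
SYNAPSES_PER_NEURON = 1e4  # ~10,000 synapses per neuron (rough assumption)
FIRING_RATE_HZ = 100       # upper-bound firing rate in Hz (assumption)

synapses = NEURONS * SYNAPSES_PER_NEURON    # ~8.6e14 synapses total
ops_per_second = synapses * FIRING_RATE_HZ  # ~8.6e16 synaptic events/s
exaflops = ops_per_second / 1e18            # 1 ExaFlop = 1e18 ops/s

print(f"~{exaflops:.2f} ExaFlops")  # lands near the low end of 0.1-1
```

Plugging in a lower average firing rate or a higher synapse count moves the result around within roughly an order of magnitude, which is why the original range is stated so broadly.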
2) Algorithm: We don't have the algorithm yet, but we have made massive gains in the past 5-10 years. 5 years ago, if you had asked an ML expert how soon a machine would be able to beat Go champions and Starcraft 2 champions, they would have said something very similar to what you said: that we don't even know how to tackle the problem, and we might do it in 20 to 40 years. Today both Go and Starcraft are solved problems. Research groups like DeepMind have nothing left to do but work directly toward achieving human-level AI.
3) Biology: Even if we are completely stuck at figuring out the algorithm, we have a fallback: we look at biology. Actually, it's not even a serial thing. Neuroscientists are already looking at mammalian brains and trying to reverse engineer them, regardless of what CS/ML/AI folks are doing. We have already mapped the full connectome of the human brain [1]. We know very well how biological neurons work, barring some qualifications regarding quantum behavior. We have already simulated, or are in the process of simulating, the CNS of a worm [2]. Once we fully simulate the brain of a mouse, I don't think we'd be very far from scaling it up.