Even with AlphaGo the hype was insane, and it was mostly caused by people confusing weak AI with Arnold Schwarzenegger-style strong AI. Any material that intentionally plays on this confusion is arguably false advertising and fraudulent.
The truth is, it's still nearly impossible to talk meaningfully about strong AI, because we have hardly been able to define what it even is. We just know that we have it, and that it's unlike anything anyone is working on that has reached the public. The people who write the papers absolutely know this. It's a common goal, but we have yet to engineer anything that remotely resembles strong AI.
For the most part, we're either still busy figuring out small but hard problems, or hacking together a resemblance of intelligence while avoiding the important problems altogether.
If I'm trying to solve problem X and writing a paper about a new, better AI/ML approach to it, that work is most likely not even a step in the direction of strong AI. In fact, the new and improved solution is quite likely to be more specialized to the problem, and thus even further from a general AI.
If I could magically build a strong general AI, it most likely wouldn't be an improved or even a good solution to the problem I'm looking at. It would be horribly inefficient overkill, comparable to using a human brain to perform arithmetic: it can do it, but less accurately and more wastefully than a simple calculator.
It takes years of training before humans can demonstrate human-level intelligence. Unless the first strong AI is superhuman, it's not going to look like strong AI for several years.
It takes years of training before humans can be trusted with pants. Really, we should lower our expectations of AIs by three or four orders of magnitude. If they can walk after less than a year of trying, they win.
Yes, humans are also very hyped organisms. Let's dispel the notion that a defining trait of humans is producing or processing highly complex language patterns.
Engineering requires understanding. If we're randomly mixing potions, we may get a result without any understanding, but there's a name for that: magic.
Strong AI will most certainly appear magical, but any technology with this level of sophistication will require focused, intentional, intelligent effort. The person who accomplishes it will know exactly what they were doing.
This mystical emergence of intelligence is also somewhat metaphysical and unscientific. There are no ghosts in shells, and intelligence doesn't just rise from bare metal. If we asked whether someone could invent Bitcoin without knowing it, it would be a joke. Yet with AI, there are those who still intuitively argue for some form of emergence. Across the board, these claims are made with little or no understanding of what underlies physical intelligence.
Excellent point. It seems to me there's a very good chance that, by judging it by our own abductive/deductive/inductive metrics, we could miss its blossoming. Your own two cents?
No one does.