The author and his editors deliberately conflate the two terms in the headline, and clarify after about a paragraph. Sure, AlphaGo is not strong AI. And Abraham Lincoln is dead. And there are lots of other things everyone knows that don't deserve to be in the news.
Jean-Christophe Baillie wrote this piece. He has done some work in AI and computer vision, and he had the opportunity here to write a piece that reflected that expertise. Instead he made a rather facile point that is similar to saying "Graffiti isn't art." That argument won't lead anywhere interesting.
He rehashes the Chomskyan argument about meaning: that AIs won't understand meaning until they can connect representations to the world. Then he makes the point that this requires embodiment, that they have a physical body in space through which they connect to the world. I don't believe either of these is a necessary requirement for AI.
An AI that solves a lot of problems better than humans, and transfers that learning across many problems with relative ease, is getting close to strong AI. It doesn't really matter whether the machine constructs meaning to itself or not. It doesn't have to be humanlike to be strong. Secondly, AIs can train perfectly well in virtual environments. They don't have to be embodied as robots to be considered strong. We can model the world with enough complexity to give AIs problems that, if solved, might define them as strong.
AlphaGo solved one of the hardest challenges which we believed to be very far out in the future before it came along. I'd say the field is on the right -- and very exciting -- track; people just need to learn to approach it in a clear-headed manner.
Good on IEEE though for welcoming opposing viewpoints.
Also the objective was different - they tried to train a network that would play well against any opponent. It should be easier to train a network that exploits some very narrow weakness in specific AlphaGo version.
For example, my phone consistently beats me at chess. But I found a sequence of moves in one opening where it always makes the same mistake, and after that I can win easily.
1. How many handicap stones can AlphaGo give to human players while still coming out ahead? The idea of a handicap is that the stronger player gives the weaker player a few "freebie moves" before the game proper starts. For reference, a typical professional player can give two to three handicap stones to the strongest amateurs.
2. Can AlphaGo defeat a group of professionals who are also given unlimited time to think and discuss?
If I recall correctly this was part of the commentary during the matches so I can't find a good source for it right now.
It's a response to people who think because of AlphaGo that we are on the cusp of achieving true / strong / general AI.
> An AI that solves a lot of problems better than humans, and transfers that learning across many problems with relative ease, is getting close to strong AI.
We're not close. See Yann LeCun's statement on AI after AlphaGo, and the HN response.
You probably haven't studied much or any AI. In my opinion, those who haven't researched a subject don't understand the details and are likely to make more inaccurate predictions than experts.
In the past 50 years, AI has seen hundreds of small successes in narrow tasks that used to require humans. But none so far has shown the potential to scale up, generalize, and serve tasks other than the narrow one for which it was designed. Like IBM Watson, AlphaGo too is likely to be consigned to the AI scrapheap in the sky.
BUT... the deep net technique used by AlphaGo shows more promise to solve the remaining unsolved AI tasks than any AI method before it. Yes, we still don't know DL's limits, like whether it can integrate one-shot learning, or build and reuse a diverse knowledgebase, or transfer specific methods to solve new more general problems. But as of right now, it's shown greater promise to solve novel weak AI tasks than any past technique I've seen. The author overlooks that potential deliberately and provocatively, and IMO, pointlessly.
Can DL scale up into strong AI too? I think the important thing here isn't that the answer isn't obviously yes (as the author posits), but that the answer isn't obviously no. And in the 50+ year quest for strong AI, that's a first, at least for me.
It would be self-serving if I lied in an interview and said I was qualified for a job making robots if I'd never had any experience doing so.
This is just a short title, and ITT people didn't read the first paragraph.
So it uses a strong claim — i.e. AlphaGo is not AI — then subsequently changes it to a smaller claim — AlphaGo is not AGI.
Reading the entire article does not change the fact that this can be seen as a technique to attract readers. Claim rare X; redefine X as a subset of Y; then claim it was really Y all along. Moving the goalposts.
With respect, you are being pedantic.
I think that's what the author was thinking when he wrote the title, and I'm giving him the benefit of the doubt.
I'm being flexible, not picky. Disagree with me? Fine, I really don't care.
I could guess that he didn't use a qualifier because there are many different ones and none universally accepted. Or, maybe he wanted to reach a non-tech audience who wouldn't know the difference between strong AI and AI.
There are plenty of good reasons, and reading the article clarifies the author's meaning.
Perhaps we need a new term to refer to "true" AI, preferably one that matches people's general notions of what an AI should be. "Synthetic Thought" maybe?
> Several research labs are now trying to go further into acquiring grammar, gestures, and more complex cultural conventions using this approach, in particular the AI Lab that I founded at Aldebaran, the French robotics company—now part of the SoftBank Group.
This is very interesting. In the human realm, there is the celebrated case history of Helen Keller, who lost her sight and hearing in early childhood. She would arguably "pass" whatever tests were administered for evaluating a strong AI.
However, walking backwards from that, towards case histories where patients were born deaf and blind, they too could pass those kinds of tests. But add more sensory disabilities, and it becomes murkier. Has anyone performed a detailed survey of medical histories? Is there a point beyond which the lack of a specific set of senses leads to sapience incapacitation in 100% of documented instances in humans? Could that be a promising lead for building a virtual environment that simulates a minimum threshold of senses?
Also… it seems weird to consider having “human” inputs embodied but having direct access to information stored elsewhere as not — surely there’s disk platters, SSDs and other devices existing in the real world, no?
My cursory search says hearing and sight are not necessary. Leaving us with smell, taste, and touch. Smell and taste are closely interlinked, so there is a possibility that a simulated smell-taste and touch are all that is required to "bootstrap" the first AGI, so to speak.
Your point recalls to mind the scene in Battlestar Galactica (the reboot) where Cavil fulminates "I don't want to be human". I suspect once we bootstrap the first AGI, you are absolutely correct: we will rapidly add more senses to the AGI's repertoire. But we only have one data point as a template to build AGI from, ourselves, so I also suspect the first AGI will somewhat resemble us. If so, and if the embodiment principle is one of the right routes to take, then it makes sense to me to simulate what we empirically determine to be the minimum sense set, and then expand from there. Walk before we run, so to speak.
It wouldn't hurt for someone investigating the embodiment principle to build for as many senses as they can envision, though. Many eyes, and all that, for attacking the problem space. I'm just expressing an uninformed gut feeling here, as I am not in the AI research space.
Playing go or chess would have been classified as activities clearly requiring intelligence in the recent past. We're just moving the goalposts whenever AI advances. At some point, it'll probably divide mankind into factions which either want or don't want to grant certain rights to AI (possibly obsoleted by the great AI uprising of 2025-11-20, 6:45:23am to 6:45:25am).
The 'real world' is not nearly as nicely ordered and specified as a turn-based game is. In such a game, a computer is ideally positioned to take advantage of its ability to:
- remember perfectly, forever
- execute algorithms at extreme speed
- in parallel
So you end up with a race that is comparable to a sparrow against a jet airplane. Technology can and does beat biology regularly in all sorts of domains.
Slowly but surely computers are making their way into the real world: their programmers - and by extension the systems - learn how to adapt to the messiness of interacting with life as we know it.
Just like you'd first get kids to play in a playground (the games) and then let them loose in the real world (self driving cars and other much harder tasks).
In the end, all of these amount to a slow but steady march towards human equivalence on a large range of tasks, and excelling humans on plenty of them.
The hard AI question is whether there will one day be a system that is better than humans at essentially all tasks without exception, including learning, creativity, and so on. Once you reach that level all bets are off; until then the slow-but-steady march continues.
The difference is evolution vs revolution, but we've already seen enough progress in limited AI domains that we could conclude we're already seeing a minimal revolution. Whether it will result in a major one is still an open question, it could easily be that progress will at some point plateau.
Like Scott Aaronson said, "You can look at any of these examples -- Deep Blue, the Robbins conjecture, Google -- and say, that's not really AI. That's just massive search, helped along by clever programming. Now, this kind of talk drives AI researchers up a wall. They say: if you told someone in the sixties that in 30 years we'd be able to beat the world grandmaster at chess, and asked if that would count as AI, they'd say, of course it's AI! But now that we know how to do it, now it's no longer AI. Now it's just search."
Not by the normal definition of intuition. It reasons about millions of moves.
"During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement learning methods"
The other part that increases its strength is the positional evaluation through the value network. It's ALSO intuition based on which points are going to belong to which player.
Those two combined give it a high dan level, maybe around 8d.
Only the last component (Monte-Carlo tree search) is not quite as similar because it uses random game roll-outs to ALSO give an evaluation.
AFAIK, the value network evaluation tends to overestimate territories while MCTS underestimates them, so using both makes it stronger than professional level.
The levels I gave are for the version that played Lee Sedol. The version that went 50-0 on the Internet is clearly stronger and might already be professional level without MCTS.
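The mixed evaluation described above can be sketched in a few lines. Per the published AlphaGo paper, a leaf position is scored as a blend of the value-network estimate and a rollout outcome, V(s) = (1 - λ)·v(s) + λ·z. The "network" and "rollout" below are toy stand-ins of my own, just to show the combination:

```python
import random

# Hypothetical stand-ins for the two evaluators discussed above.
def value_network(position):
    # A trained network would map a board position to a win-probability estimate.
    return 0.6  # placeholder: slight advantage for the player to move

def random_rollout(position):
    # Play random moves to the end of the game; return 1 for a win, 0 for a loss.
    return random.choice([0, 1])

def leaf_evaluation(position, lam=0.5):
    """Blend the value-network estimate with a rollout outcome,
    as in AlphaGo's mixed leaf evaluation V(s) = (1-lam)*v(s) + lam*z."""
    v = value_network(position)
    z = random_rollout(position)
    return (1 - lam) * v + lam * z
```

With lam=0 you get the pure value-network "intuition"; with lam=1, pure Monte-Carlo rollouts; the published system used a mix.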
He makes an even stronger, curious claim: that "This realization is often called the “embodiment problem” and most researchers in AI now agree that intelligence and embodiment are tightly coupled issues."
That makes it sound like there is a consensus among researchers that strong AI requires a robotic body. Which I doubt.
I'm intrigued by the potential of deep nets to 'simulate' the bases for symbolic grounding using something like thought vectors (TVs), especially if they can be revised and augmented through personal experience. Bootstrapping one's knowledgebase using someone else's rich set of TVs may go a long way for an AI to become grounded vicariously.
Helen Keller had smell, taste, touch, and proprioception all functioning. In what way did she have no senses but a teletype?
It occurred to me later that one can argue that Helen Keller, like the rest of us, was born with a brain shaped by hundreds of millions of years of interacting with the environment. I do not doubt that interaction is probably an essential part of us getting to AI, but I doubt that physical embodiment is the only way to do it.
Alpha Go is an AI project with historical relevance. It had a well-specified and measurable goal: beating humans at a game that requires developing an intuition for moves rather than just reading out moves in advance.
Having said that I don't understand what's the point of trying to delegitimize the significance of Alpha Go. Implementing what we call "intuition" is not a minor thing.
In contrast, I encountered the Pepper robot in a mall in SF. It was almost impossible to interact with, so if there is going to be a critique, why not start from one of his own projects?
Intelligence implies a lot more than pattern recognition. The current state of the art is just a better way to treat large quantities of data. The smart part is feeding the computer the right way with good data, iterating through different ways the computer can guess a model from the data, and finally interpreting the results. Computers don't know how to do that. Yet.
That's exactly how AlphaGo was trained.
Everyone in the field. There are many more people outside of the field though.
> What is AI and what is not AI is, to some extent, a matter of definition.
It is by the general and widely used definition that AlphaGo is actually an AI. It just happens to solve one very specific problem very well, making it a weak AI.
> Again, the rapid advances of deep learning and the recent success of this kind of AI at games like Go are very good news because they could lead to lots of really useful applications in medical research, industry, environmental preservation, and many other areas. But this is only one part of the problem, as I’ve tried to show here. I don’t believe deep learning is the silver bullet that will get us to true AI, in the sense of a machine that is able to learn to live in the world, interact naturally with us, understand deeply the complexity of our emotions and cultural biases, and ultimately help us to make a better world.
The piece was written at a time when the world was much more interested in what AlphaGo was doing and once again people were perhaps getting too excited about AI, but it really doesn't hold much value today. It may have been good to extinguish some of the misguided enthusiasm people were exhibiting back then, but no one is even talking about AlphaGo at the moment and there is little to no value in discussing what it means for strong AI anymore.
* Fuzzy problems -- image, sound and free text recognition. Where there is no real "true answer".
* Problems too hard to solve in a reasonable time without heuristics -- SAT, scheduling, etc. In practice, NP-hard problems and further up the complexity hierarchy -- AlphaGo goes here.
Once we know how to do something reliably, it stops being AI and just becomes "an algorithm" :)
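To make the second category concrete, here's a WalkSAT-style local search for SAT: the kind of heuristic that often finds satisfying assignments quickly even though the problem is NP-complete. This is my own toy version, not anything from the article:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """WalkSAT-style local search. Clauses are lists of nonzero ints
    (DIMACS-style): literal v means variable v is true, -v means false."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))  # random-walk step
        else:
            # greedy step: flip the variable that satisfies the most clauses
            def score(v):
                assign[v] = not assign[v]
                s = sum(satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return s
            var = max((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None  # gave up; the instance may still be satisfiable
```

No guarantees, no completeness -- just a heuristic that works well in practice, which is exactly why people stop calling it "AI" once it's routine.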
I don't understand why so many commenters here are so furious about the author's claim that AlphaGo is not AI. What qualifies as AI is really up to definition, and there doesn't seem to be any widely accepted one among AI researchers. The author of the IEEE article doesn't think AlphaGo matches his definition of AI. Other AI researchers may think otherwise. But that doesn't mean the author is trying to dismiss the achievement of the AlphaGo team.
I don't think they really are. I think they are annoyed because nobody (who knows the difference) ever claimed that AlphaGo was strong/true/general AI) - and the article feels like it's swatting at strawmen.
AI is a moving definition and always seems to be "What we can't do right now."
I think the main problem is you end up needing a language to describe the problem, and that ends up limiting the problems that can be solved, or you have to explain so carefully what the problem is it feels like cheating.
We are still quite far from anything we could call "human-style learning", but we're definitely getting there (just look at all the recent publications on reinforcement learning and elaborate ways to use memory in neural nets).
1. dynamically notice world-features ("instrumental goal features") that seem to correlate with terminal reward signals;
2. build+train entirely new contextual sub-models in response, that "notice" features relevant to activating the instrumental-goal feature;
3. shape goal-planning in terms of exploiting sub-model features to activate instrumental goals, rather than attempting to achieve terminal preferences directly. (And maybe also in terms of discovering sense-data that is "surprising" to the N most-useful sub-models.)
In other words, the AI should be able to interact with reward-stimuli at least as well as Pavlov's dog.
Right now, ML research does include the concept of "general game-playing" agents—but AFAIK, these agents are only expected to ever play one game per instance of the agent, with the generality being in how the same algorithm can become good at different games when "born into" different environments.
Humans (most animals, really) can become good at far more than a single game, because biological minds seem to build contextual models, that communicate with—but don't interfere with—the functioning of the terminal-preference-trained model.
So: is anyone trying to build an AI that can 1. learn that treats are tasty, and then 2. learn to play an unlimited number of games for treats, at least as well as a not-especially-smart dog?
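The first step above -- noticing world-features that correlate with terminal reward, roughly what Pavlov's dog does -- is easy to sketch. Everything here (feature names, thresholds) is my own illustration:

```python
from collections import defaultdict

class PavlovianLearner:
    """Toy sketch of step 1 above: notice world-features whose presence
    correlates with a terminal reward signal (classical conditioning)."""

    def __init__(self, threshold=0.8, min_observations=5):
        self.counts = defaultdict(lambda: [0, 0])  # feature -> [seen, seen_with_reward]
        self.threshold = threshold
        self.min_observations = min_observations

    def observe(self, features, reward):
        for f in features:
            seen, rewarded = self.counts[f]
            self.counts[f] = [seen + 1, rewarded + (1 if reward > 0 else 0)]

    def instrumental_goals(self):
        """Features that reliably predict reward -- candidates for
        spawning the contextual sub-models described in step 2."""
        return {f for f, (seen, rewarded) in self.counts.items()
                if seen >= self.min_observations
                and rewarded / seen >= self.threshold}

# The dog hears a bell before most feedings:
learner = PavlovianLearner()
for _ in range(10):
    learner.observe({"bell", "daylight"}, reward=1)
for _ in range(10):
    learner.observe({"daylight"}, reward=0)
```

The hard parts are steps 2 and 3: building sub-models around those features and planning against them, which this sketch doesn't touch.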
When people give credit to the human designers for AlphaGo's wins, saying it is really a win for humanity, I disagree. The wins are AlphaGo's even if the design is of human ingenuity.
When you say that the outputs of human ingenuity should be credited to Evolution, I similarly disagree. You might as well credit evolution for AlphaGo's win. While it is true that Evolution invented the first AGI (and in some, though not all, ways a superior intelligence to it), it still makes sense to separate the products of human learning from whatever structural priors DNA passed along. I'll also point out that compared to most animals, humans actually have weaker priors and spend a lot of their early days learning to learn.
I think it's fascinating that we have developed from un-self-reflective animals to abstract thinkers on the verge of creating wholly new abstract-thinking entities from scratch, in only two and a half thousand steps. Especially given that the majority of the technical knowledge necessary was developed only in the last 500 years, or 25 generations.
Only for a little while. AI is how we describe new technology that does something we think of as requiring human intelligence. AI, like the word "technology" itself, is a term we use during the lag between creating something and learning to take it for granted.
I don't foresee an end to technological development, but I do think eventually there will be no AI, because our ability to intuitively appreciate intelligence doesn't much surpass our own. Beyond some point we will stop perceiving new technological capabilities as higher levels of intelligence. There will be people, computers, appliances, services not tied to individual machines, and genetically engineered rabbit tutors for our children, and nobody will be surprised to encounter intelligence anywhere, least of all in a computer.
Which, since consciousness is likely an emergent property of our own biological neural net and training data/weights, would likely have similar issues to those humans have when exposed to bad data while growing up.
Well that shoots down my "minds are made of mechanical internal drives/pistons/levers" theory.
That's what I would define as weak AI. So yes, even algorithms which do not adapt their behaviors by learning, but can imitate intelligent human behavior to some extent would be weak AI, such as AI in video games.
Now strong AI would be: "the capability of a machine to originate intelligent human behavior"
This means the intelligent behavior would originate from the machine itself. Some Machine Learning algorithms exhibit strong AI, in that they find ways to even outperform a human, by themselves, through the learning process. Sometimes we can't even understand why it chose to do what it did, yet it appears more intelligent than what most humans would have chosen to do themselves.
Now Artificial General Intelligence, or sometimes called strong AI also, but I think there needs to be a distinction as I explain before. This would be: "the capability of a machine to originate any intelligent human behavior"
The reason a lot of people are talking about true AI, AGI, strong AI, etc. today is that Machine Learning is seen as the be all, end all of AI. Yet practitioners and scientists know that it is not, at least in its current form. Currently, it can solve simple decision tasks, probably the kind of decision a person makes in under a second or two, like in driving. But it does not solve them generally, so a person must train a different model for each task.
At the same time, having a way to automate decision making of a single simple task is huge in terms of potential. There's so many things we can automate through this, and if we combine them, you can create even more complex autonomous machines. That's why people are excited. Though I understand that it is not as awesome as having found general AI, or AI which can make complex decisions. The dream of AI continues, but the practicality of it is closer than ever.
Replacing various human thought capabilities has been the goal of computers and algorithms since Turing and von Neumann.
> in that they find ways to even outperform a human, by themselves, through the learning process
Computers have been outperforming humans in many tasks since the moment of their creation. Some animals learn some things better than humans. Are they more intelligent?
> Machine Learning is seen as the be all, end all of AI.
I think it is seen as that mostly by non-experts, and generally by those who just want to believe we're close.
> At the same time, having a way to automate decision making of a single simple task is huge in terms of potential. There's so many things we can automate through this, and if we combine them, you can create even more complex autonomous machines.
Sure, I just don't see what any of this has to do with intelligence. Bacteria are autonomous, and so is the weather. The universe, too, for that matter. None of those is particularly intelligent.
AlphaGo was trained for Go; could it be given chess and learn it just as well? The answer is no.
If I had to personally define AI, I would give a machine input from cameras and microphones (eyes and ears) and output to the limbs of a robot. And I would see whether, given sufficient years, it can learn language and conversation and the ability to use the robotic body for sports.
The point the article makes is that a general intelligence (or however you wanna call something that has intelligent function like a human) is not attained by AlphaGo. It might help but it is only part of the puzzle. There is work needed in other directions.
 S. Legg, M. Hutter, Universal Intelligence: A Definition of Machine Intelligence. https://arxiv.org/abs/0712.3329
Consider AI as a newly discovered species. If you were trying to discern if a previously unknown cetacean were intelligent, or if life discovered on a distant planet were intelligent, would you only say "intelligence discovered!" after it equaled-or-surpassed human performance on many or most kinds of thinking historically valued by humans? I wouldn't. I think that AI is already here and that the people waiting for artificial general intelligence will keep raising the bar and shifting the goalposts long after "boring" narrow AI has economically out-competed half the human population.
Most people will not be impressed by a machine that can master backgammon, chess and poker, despite it being a great technical feat. They would be impressed by one that can successfully teach a 5th grade math class, even though there are hundreds of thousands of people who can do this.
This would require more than teaching the kids math, but also how to deal with the kid who loses a parent during the school year. How to deal with bullying in the class, with misbehaving students. None of it is "specialized knowledge" like playing GO. And we are nowhere even remotely close to this.
There has been an uptick in people pondering the economic implications of driverless vehicles and a more robotic future. That discussion seems kind of oddly isolated from re-considering the nature of intelligence, human and otherwise. It's as if after the Industrial Revolution people kept narrowly scoping the meaning of "power" to "muscle power" rather than acknowledging mechanical forms. Oh, yes, that coal fired pump can remove water from the mine faster than I can... but it just uses clever tricks for faking power.
Woah wait what? Non-human animals successfully navigate >7 miles of mountain terrain all the time.
Machine intelligence doesn't seem like "real" intelligence because it just doesn't seem as generalizable. Taking the engines and hydraulics used to great effect in water pumps and applying them to construction cranes required engineering work, sure, but no new physics. But you can't just take the convolutional neural nets that are breaking new ground in computer vision and apply them to natural language processing, you need new computer science research to develop long short-term memory networks.
The cool thing about AlphaGo, from my understanding, was that it was able to train the deep learning-based heuristics for board evaluation by playing a ton of games against itself. This is especially awesome because those heuristics are (were?) our main edge over machines. But in CV and NLP, playing against yourself isn't really a thing, so again, this work doesn't automatically generalize the way engines and hydraulics did.
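The self-play idea can be shown on a toy game. Below, a tabular value function is trained purely by playing against itself at single-pile Nim (take 1 or 2 stones; whoever takes the last stone wins). The game and all parameters are my own illustration, not AlphaGo's actual setup:

```python
import random

def train_self_play(pile=10, episodes=20000, epsilon=0.1, seed=0):
    """Learn value[n] = estimated win probability for the player to move
    with n stones left, purely from games played against itself."""
    rng = random.Random(seed)
    value = {n: 0.5 for n in range(pile + 1)}
    value[0] = 0.0  # no stones left: the player to move has already lost

    for _ in range(episodes):
        n = pile
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < epsilon:
                move = rng.choice(moves)  # explore
            else:
                # leave the opponent in the worst position we know of
                move = min(moves, key=lambda m: value[n - m])
            # TD-style update: my winning chances = 1 - opponent's chances
            value[n] += 0.1 * ((1 - value[n - move]) - value[n])
            n -= move
    return value

values = train_self_play()
```

With no hand-coded strategy, the table converges toward the known theory of the game: positions that are multiples of 3 are losing for the player to move.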
From the other end, the merits of production AI will be measured against isolated properties pulled out of the intelligence black box, and since intelligence remains undefined (in a strict ontological sense), debate ensues.
I believe cognitive scientists have a considerably better idea than that.
When I said "make a big fuss" I didn't mean "herald it as the gold standard", but they do go on about it even though they might disagree about what it ultimately signifies.
A tardigrade achieves its goals in a wide range of environments; I don't think anyone would call it intelligent. I've known quite a few 15-year-old juvenile delinquents that I'm certain were able to achieve their goals in a much wider range of environments than Albert Einstein; were they all much more intelligent than Einstein?
> But it is a great step towards AI.
How do you know how big a step it is? None of our learning algorithms has yet achieved even an insect's level of intelligence (which would normally be considered zero, or close to it). How do you know we're even on the right path? I mean, I have no real reason to doubt that we'll get AI sometime in the future, but the belief -- certainty, even -- that AI is imminent has been with us for about sixty years now.
Also, if intelligence means "achieving goals in a wide range of environments", then I think some computer viruses are far more intelligent -- by this definition -- than even the most advanced machine-learning software to date.
To use a flawed but useful mode of thinking: we've succeeded at codifying crystallized intelligent systems (far beyond human means), we've made some progress on fluid intelligent systems (at worst random search), but we have not managed to make any progress on the mediator of the two. We can build a million different single-purpose neural networks, but we have very few meta-classifiers for knowing which network to use at which time (let alone ones that can track and change their own self-modification). Watson is possibly the closest we have at this time, but it seems to work on static information which it cannot update or improve by itself.
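For concreteness, here is the trivial version of that missing mediator: a dispatcher that routes a task to one of several single-purpose models. The domains and specialist functions are invented for illustration; the unsolved part is everything this sketch hand-waves away:

```python
# Hypothetical single-purpose "networks" (stand-ins for trained models).
def go_model(task):
    return f"go move for {task['position']}"

def chess_model(task):
    return f"chess move for {task['position']}"

SPECIALISTS = {"go": go_model, "chess": chess_model}

def meta_classify(task):
    """Route the task to the sub-model whose domain matches.
    A real mediator would have to *learn* this routing, and notice
    when no existing specialist fits -- which is the unsolved part."""
    model = SPECIALISTS.get(task["domain"])
    if model is None:
        raise NotImplementedError("no specialist for this domain; "
                                  "a fluid-intelligence fallback would go here")
    return model(task)
```

Hard-coding the routing table is exactly the cheat: the interesting system would build and revise that table on its own.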
You may think it doesn't matter, but we have candidates in the presidential election here in France building economic programs on the assumption that "robots will do everything soon, so let's provide everybody with a universal basic income" is a likely immediate future.
It is extremely healthy to let people understand that all we've been able to build so far are number-crunching machines, which absolutely don't give any kind of meaning to the numbers they see passing by. Meaning as in: I see what this number represents, and the consequences it has in real people's lives.
All humans are just protein factories, which don't have any meaning other than electrical pulses and chemical reactions.
For sure there's no clear, agreed, universal meaning, and most likely it does not have a meaning to you. Not everyone thinks like that, though.
edit: [OT] "have a meaning to you" or "for you"? I have the impression that there's a slightly different interpretation, and one might be offensive (not intended). Or are they the same?
If the machine comes up with the decision making logic, it's AI, if it does not, it's not AI.
A sorting algorithm, in most cases, is a description of the logic to decide how to proceed at each step to accomplish the task of sorting. The computer merely executes that logic for us very very quickly.
On the other hand, self-driving machine learning algorithms come up with the logic of when to steer left or right and to what degree, when to accelerate and decelerate and to what degree, all by themselves. Often, we can't even formalize the logic they use; we can only test to see how well it performs. This is AI. The computer originated the intelligence, it did not simply execute it.
Within that, there is a spectrum which even humans exhibit. I cannot come up with logic to solve all tasks. For most tasks, I would need a good teacher and a lot of practice before I could find, on my own, logic that solves the task even slightly, depending on the difficulty of the task at hand.
I'd say there's only one exception to this rule. Sometimes it is useful to call something which appears intelligent AI. Like in most video games, the computer is actually simply executing the logic a programmer told it about, but to most players, you will believe as if it is coming up with it on its own. Most often, it does not and did not, a programmer came up with it, but you're fooled into thinking otherwise, and the computer appears sentient. This is not true intelligence, but being able to use logic to make decision, even if that logic did not originate from you, is probably a part of intelligence. In a sense, if I could program a machine to perform all tasks, even though it uses the logic that I came up with, that machine would appear quite useful. It would only fail when encountering a task it has never seen, or a condition in a task that's not accounted for in my logic. Machine learning would not fail at those, in that, it could re-learn from the new condition, and come up with a different way to solve the task.
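The distinction above can be made concrete with a toy example of my own (not from the comment): a braking rule a programmer wrote down versus one the machine derives from examples, here by learning a simple decision-stump threshold:

```python
def hand_coded_brake(distance_m):
    # A programmer wrote this logic; the computer merely executes it.
    return distance_m < 10.0

def learn_brake_rule(examples):
    """Derive a braking rule from (distance, should_brake) examples by
    picking the cutoff that misclassifies the fewest of them.
    Nobody wrote the resulting threshold down -- the machine found it."""
    candidates = sorted(d for d, _ in examples)

    def errors(t):
        return sum((d < t) != should for d, should in examples)

    best = min(candidates + [candidates[-1] + 1], key=errors)
    return lambda distance: distance < best

data = [(2.0, True), (5.0, True), (8.0, True),
        (12.0, False), (20.0, False), (30.0, False)]
learned_brake = learn_brake_rule(data)
```

Both rules end up behaving similarly here, but only the second one originated from data rather than from a programmer's head, which is the dividing line the comment proposes.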
> Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.
To me, saying algorithms are on the intelligence spectrum is like saying the weather report is on the humour spectrum. It's true but ultimately meaningless, because neither algorithms nor the weather report exhibit the interesting properties of those spectrums.
Atkin often complained about the characterization of their program as AI, saying (paraphrasing here) "It isn't AI. It is just good software engineering."
I look at artificial intelligence as an umbrella term whose meaning varies with context. If you classify a task as needing intelligence to solve, and someone finds a piece of code that does it, you have an artificial intelligence there. You can keep moving the goalposts, but it's only nuance and definition.
I think that creating similar tests as milestones would do more to progress the AI field. Like the Turing Test, each must test that parity with another intelligent being is achieved. Intelligence is a spectrum from "not very" to "omniscient". What are some organisms closer to the "not very" end of this spectrum that would be good candidates for parity tests? How would a parity test be performed?
It is in answering that question that one arrives at “There is no AI without robotics”. If you want to demonstrate that your artificial cockroach is as intelligent as the real thing, you have to build a robot. It doesn't have to be a physical robot, as long as you have a simulator that can very accurately simulate the physics of the environment, the AI candidate, and their interactions. That's not so easy, which is why many if not most researchers build physical robots.
Would a cockroach be a good first milestone? Too complex? How about a worm? Achieved parity with cockroach? Then move on to Jumping Spiders. When your great-grandchildren achieve that parity, my great-grandchildren will certainly celebrate with them.
1. Regression - Given a set of vector pairs (Xi, Yi), find a function that maps each Xi to Yi, minimizing some objective function.
2. Path Finding - Given some topology (typically over some regression output), find the "best" path according to some objective function.
All other problems (classification, clustering, etc) can be reposed as a combination of these two.
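A minimal sketch of the two problem types as the commenter defines them (these are standard textbook instances, not anything specific to AlphaGo): simple linear least squares for regression, and Dijkstra's algorithm for path finding, with total edge weight as the objective function.

```python
import heapq

# 1) Regression: fit y ≈ a*x + b by minimizing squared error
#    (closed-form simple linear least squares).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# 2) Path finding: Dijkstra's algorithm over a weighted graph.
def shortest_path_cost(graph, start, goal):
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None  # goal unreachable

a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])        # recovers y = 2x + 1
g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)]}
cost = shortest_path_cost(g, "A", "C")             # 3, via B
```

Whether classification or clustering really reduce cleanly to these two is the commenter's claim, not something the sketch demonstrates.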
Viewed in this way, an "AI" system set against any reasonable test of intelligence would score superhuman (defined against the average adult human) with the current set of algorithms, with one caveat.
If we think of operating systems in say the 90s we would mostly agree that few innovations (evolutions as opposed to revolutions) have occurred when compared to the OSs we have today. It's the development, integration, testing, tools, hardware support, etc that have taken all this time.
So, to the article's point, the AlphaGo system does these 2 things pretty well, just as Windows95 did things pretty well. Yet even today I have to restart my computer for the most trivial of updates.
TL;DR Super human Audio/Visual/NLP ML == AI is both here now and a long way off.
One could probably cut this down to one, but stating it would be very convoluted. Also, "pathing" problems are typically non-convex and real-time, whereas regression problems are typically not.
Things that a decade ago would have been described as performing regression analysis, and just last year doing machine learning, may be described this year as being powered by AI.
A fun alternative strand in AI research driven by these thoughts (I think) is typified by Rosie. Rosie is the proper job.
There is an unseemly comment in this thread that alludes to a genuine issue with the embodied-AI argument. People with other bodies are not lesser intelligences than those who have a socially cast "typical" body or sensors, and given that, one has to question this line of reasoning. The answer is that intelligence is embodied outside and beyond your physical being: it's your body, your family, your tribe, and so on. I find the philosophy difficult, but to see that there is something in this view, you need only hike out to a wood somewhere and spend a couple of weeks by yourself. Things happen and who you are changes (it did for me). Cut yourself off, or be cut off, and you are about as diminished a human (autonomous cognitive agent) as one can become.
Deep learning has been personally career/interests changing, but I still think that we should be building hybrid neural network / symbolic AI systems.
The phenomenon of people discounting significant advances as "not AI" is not new. I'm surprised it hasn't gotten old yet.
It hasn't gotten old because people keep calling non-AI AI. I have so far refrained from correcting people because I see it like the word "hacker", which on Hacker News is widely understood in its correct meaning, but is commonly used to refer to computer criminals. I've given up long ago on correcting people to use the word "crackers" rather than "hackers". But since it has been brought up...
Artificial Intelligence refers to something that behaves intelligently and can adapt to new situations. I don't think human brains are magic or divine (I was surprised to learn this is not a universal opinion), so in that sense we're all "just a computation". In that sense the simplest pocket calculator is a thinking machine, since it can take an almost infinite number of inputs and respond to them logically. But everyone with (artificial) intelligence understands that this is not what is meant by artificial intelligence.
The article that you link claims that every major advance is referred to as "just a computation", but I doubt that people will claim AI is "just a computation" once a computer learns to speak and can have a real conversation at the level of a 3 year old or something.
It's obviously speculative, but I'm of the belief that AGI will never exist the way fanciful minds imagine it. Rather AI will evolve slowly, and systems will incorporate small advances one by one until one day we don't really care to distinguish between AI and AGI. It will literally just be a collection of capabilities ("just a computation") that are well understood and have lost all semblance of magic.
You'd be surprised. Here's part of just such a conversation from a couple of years ago; I've always thought "hacker" without qualifiers meant "person who can code"
Other terms commonly used here with wildly different meanings being applied include "liberal", "feminism" and "capitalism". Screaming matches abound... :)
Discounting AI advances is nothing new, but upselling them is at least as old, as well.
My own way to look at AI winters is as a triumph of Scruffy pragmatism over Neat delusions. Intelligence is unimaginably complicated; there is no reason to believe that silver-bullet algorithms even exist, let alone that the latest fad is one.
> Views of AI fall into four categories:
> Thinking humanly - Cognitive Modeling
> Thinking rationally - “Laws of Thought”
> Acting humanly - Turing Test
> Acting rationally - Rational Agent
> The textbook advocates "acting rationally"
Indeed, most of what we see today in the AI community is solving the problem of "acting rationally", whereas the other categories have their respective fields (Cognitive Science, Cognitive Neuroscience, Philosophy) that are no longer closely related to computer science.
People have been saying "___ is not AI" since Claude Shannon built a robot mouse to solve a maze. Then Shannon's chess playing program was "not AI". Humbug.
AI researchers say this because they don't want the hype to lead investors putting money in the wrong places, which could cause another AI winter.
OpenAI is one example. Fortunately, that's a non-profit. If it were profit seeking and they had a large investment that went belly up, it might cause the industry to tank.
That bust will probably happen again in the future. The "scrooges" are just trying to forestall it.
I liked the article on many fronts and thought it tried to justify most of its claims. However, the quoted claim goes unsupported.
Does anyone know of justifications for the quoted claim?
One can turn it around: imagine an organism with no means of assessing or communicating with the outside world except the ability to send and receive electrical signals so simple they could be mapped to the game of Go, and whose survival to the next generation depends on those signals combining in certain rare patterns hardcoded into its DNA (which could be mapped to the win conditions of Go). Would we even consider the possibility of such an organism being intelligent, simply because its route to achieving those patterns was incomprehensible to, and could not be deactivated by counter-stimuli from, the humans studying it? Would such a theoretical organism even need to be composed of more than one cell?
Actual argument: "Is [AlphaGo] going to get us to full AI, in the sense of an artificial general intelligence, or AGI, machine? Not quite"
Conclusion: "the rapid advances of deep learning and the recent success of this kind of AI at games like Go"
So not only does the article discuss something different from the headline, it directly contradicts the headline by calling AlphaGo a type of AI.
And the questioning article is not real writing. It fails to meet the standards of "real writing" that I uphold. A real article doesn't come with a clickbait title, and is better documented. I don't think the author knows what he's talking about; he's just expressing incredulity based on his gut feelings.
- taylor swift is not really an artist
- paulo coelho is not really a writer
- Trump is not really human
The topic sounds interesting, but the presentation is incredibly off-putting.