AlphaGo Is Not AI (ieee.org)
181 points by henrik_w 101 days ago | 166 comments



The headline for this piece is clickbait. AlphaGo is AI, it's just not strong AI. No AI is strong AI for the moment, but that doesn't mean that the AI we have is uninteresting or a dead end, as the author seems to think.

The author and his editors deliberately conflate the two terms in the headline, and clarify after about a paragraph. Sure, AlphaGo is not strong AI. And Abraham Lincoln is dead. And there are lots of other things everyone knows that don't deserve to be in the news.

Jean-Christophe Baillie wrote this piece. He has done some work in AI and computer vision, and he had the opportunity here to write a piece that reflected that expertise. Instead he made a rather facile point that is similar to saying "Graffiti isn't art." That argument won't lead anywhere interesting.

He rehashes the Chomskyan argument about meaning: that AIs won't understand meaning until they can connect representations to the world. Then he makes the point that this requires embodiment, that is, a physical body in space through which they connect to the world. I don't believe either of these is a necessary requirement for AI.

An AI that solves a lot of problems better than humans, and transfers that learning across many problems with relative ease, is getting close to strong AI. It doesn't really matter whether the machine constructs meaning to itself or not. It doesn't have to be humanlike to be strong. Secondly, AIs can train perfectly well in virtual environments. They don't have to be embodied as robots to be considered strong. We can model the world with enough complexity to give AIs problems that, if solved, might define them as strong.


Totally agree, and very happy to see this statement at the top of HN. I'm against hyping up AI or misinterpreting it just as much as the next guy (especially coming from a position where dealing with a lot of higher-management people has made me very wary of articles and vision-pieces from the other end of the spectrum that are all like "invest now in AI or lose it all"), but this is just clickbait for the sake of enjoying playing devil's advocate.

AlphaGo solved one of the hardest challenges which we believed to be very far out in the future before it came along. I'd say the field is on the right -- and very exciting -- track; people just need to learn to approach it in a clear-headed manner.

Good on IEEE though for welcoming opposing viewpoints.


I wouldn't say the challenge has been "solved" - the system may have only been able to win consistently because the programmers made adjustments and improvements to it between each game. Fitting its heuristics to a specific opponent might actually be overfitting unless it can consistently win against all challengers without being modified between rounds. [edited to clarify uncertainty about "why" it was able to win]


AlphaGo played plenty of games online after the Lee Sedol match. It went 60-0 against some of the world's best players. Granted, they were fast games, but that's a challenge for AlphaGo as well as its adversaries.


Thank you for this information, it will definitely be interesting to see how the future of Go plays out. For example, will players be able to train with AlphaGo to the point of being able to beat it?


Humans will never defeat the strongest Go-playing programs ever again. AlphaGo is now clearly ahead of the top pros, and such programs will only get stronger. Humans should learn some things from the machines, but we will definitely not be able to extract the full totality of their understanding, and thus will naturally continue to fall behind.


Even more interesting to me is the question of whether Go players will be able to apply the novel strategies of AlphaGo, which break with decades of conventional wisdom, or whether there's something in their execution that requires such perfect play (but leads to a more certain outcome) that they're only feasibly played by AlphaGo.


That's the sort of thing I have wondered about. Chess continues to evolve since the famous Kasparov vs. Deep Blue matches, and from a quick look around human players do continue to draw matches against the strongest computers (even though they are regularly beaten, as well).


I'm not sure about human players, but it might be possible to build an adversarial neural network that beats AlphaGo (and only AlphaGo).


I thought AlphaGo was already trained in an adversarial manner (it plays against itself).


The version that played against Fan Hui was reportedly trained that way, but if I recall correctly the more recent one that played Lee Sedol didn't rely on self-play as much.

Also, the objective was different - they tried to train a network that would play well against any opponent. It should be easier to train a network that exploits some very narrow weakness in a specific AlphaGo version.

For example, my phone consistently beats me at chess. But I found a sequence of moves in one opening where it always makes the same mistake, and after that I can win easily.


The one game that Lee managed to win against AlphaGo was probably the last time any human player beat the strongest Go program of its day. That said, there are a few interesting ideas that I (being a reasonably strong amateur Go enthusiast myself) would love to see Google try out:

1. How many handicap stones can AlphaGo give to human players while still coming out ahead? The idea of a handicap is basically that the stronger player lets the weaker player place a few "freebie" moves before the game proper starts. For reference, a typical professional player can give two to three handicap stones to the strongest amateurs.

2. Can AlphaGo defeat a group of professionals who are also given unlimited time to think and discuss?


Getting a group of professionals with unlimited time to arrive at a common conclusion may be a harder challenge than AI. I would love to follow those discussions, but I am not sure they would ever end.


I believe the AlphaGo team estimated it to be about 2 stones stronger than top pros.


I believe that there was a sort of "learning freeze" for AlphaGo during the Lee Sedol games and there was no manual modification of its strategy either.

If I recall correctly this was part of the commentary during the matches so I can't find a good source for it right now.



Thank you, that's exactly what I was looking for!


Thanks for bringing this up. I was recalling more recent articles that did state there were modifications each night between matches. Perhaps those were in error; I would not have commented in that case. It's a shame nobody pointed this out sooner, I could have removed the misinformation.


It's not clickbait. You're being pedantic. It's clear the author meant true / general / strong AI and was merely addressing the hype of some non-AI researchers.

It's a response to people who think because of AlphaGo that we are on the cusp of achieving true / strong / general AI.

> An AI that solves a lot of problems better than humans, and transfers that learning across many problems with relative ease, is getting close to strong AI.

We're not close. See Yann LeCun's statement on AI after AlphaGo [1] and the HN response [2]

[1] https://www.facebook.com/yann.lecun/posts/10153426023477143

[2] https://news.ycombinator.com/item?id=11280744


I have always found the idea that there's a hard line between strong and not-strong AI extremely naive. The whole thing is a long gradient of progress. Strong AI is going to creep up on most programmers like the proverbial boiling frog, and internet forums will be full of people denying its existence throughout that process.


> I have always found the idea that there's a hard line between strong and not-strong AI extremely naive

You probably haven't studied much or any AI. In my opinion, those who haven't researched a subject don't understand the details and are likely to make more inaccurate predictions than experts.


What a randomly self-serving assumption to make.


I can't agree. If we can blur the difference between weak and strong AI, that 'transitional form' (missing link?) will be enormously important to the future of AI (and mankind).

In the past 50 years, AI has seen hundreds of small successes in narrow tasks that used to require humans. But none so far has shown the potential to scale up, generalize, and serve tasks other than the narrow one for which it was designed. Like IBM Watson, AlphaGo too is likely to be consigned to the AI scrapheap in the sky.

BUT... the deep net technique used by AlphaGo shows more promise to solve the remaining unsolved AI tasks than any AI method before it. Yes, we still don't know DL's limits, like whether it can integrate one-shot learning, or build and reuse a diverse knowledgebase, or transfer specific methods to solve new more general problems. But as of right now, it's shown greater promise to solve novel weak AI tasks than any past technique I've seen. The author overlooks that potential deliberately and provocatively, and IMO, pointlessly.

Can DL scale up into strong AI too? I think the important thing here isn't that the answer isn't obviously yes (as the author posits), but that the answer isn't obviously no. And in the 50+ year quest for strong AI, that's a first, at least for me.


It's not self-serving to say PhDs are better than me at predicting the future of their field.

It would be self-serving if I lied in an interview and said I was qualified for a job making robots if I'd never had any experience doing so.


It is easy to see whether a title is clickbait or not. Just replace it with some stupidly long title, such as "AlphaGo does not solve the problem of meaning, so it is not AGI", and ask yourself whether this title would have generated a similar number of clicks. The title is as clickbait as it can get.


No, clickbait is "10 ways to find your spouse"

This is just a short title, and ITT people didn't read the first paragraph.


If you read the first paragraph, it mentions first AI then AGI, and does not deny that the latter is a subset of the former. In the second paragraph it even mentions that today’s AI is not AGI.

So it uses a strong claim — i.e. AlphaGo is not AI — then subsequently changes it to a smaller claim — AlphaGo is not AGI.

Reading the entire article does not change the fact that this can be seen as a technique to attract readers. Make a strong claim about X; then quietly narrow it to a claim about a subset of X; then insist the narrower claim was what you meant all along. Moving the goalposts.


Poppycock. Anyone who understands software knows AlphaGo used AI tech and that the title is referring to general AI. Anyone who doesn't will have their understanding clarified in the first paragraph.


There's a clear and meaningful difference between "not AI" and "not strong AI".

With respect, you are being pedantic.


Look, if I walk up to somebody and say AlphaGo is not AI, they'll understand what I'm saying, because the layperson doesn't distinguish between AI and strong AI. Only techies do.

I think that's what the author was thinking when he wrote the title, and I'm giving him the benefit of the doubt.

I'm being flexible, not picky. Disagree with me? Fine, I really don't care.


Then why not title the article "Alphago is not strong AI"?


Read the article. The title isn't a substitute for reading. It's short because your attention span is short.


You didn't answer my question. There's an advantage to using the long title "Alphago is not strong AI": fewer people will be angry at a perceived clickbait title. What's the advantage of using the short title "Alphago is not AI"?


You're asking me why the author chose this title? I am not the author.

I could guess that he didn't use a qualifier because there are many different ones and none universally accepted. Or, maybe he wanted to reach a non-tech audience who wouldn't know the difference between strong AI and AI.

There are plenty of good reasons, and reading the article clarifies the author's meaning.


If that were true, the article would be useless, because nobody ever said that AlphaGo was 'strong' AI.


> It's clear the author meant true / general / strong AI

Perhaps we need a new term to refer to "true" AI, preferably one that matches people's general notions of what an AI should be. "Synthetic Thought" maybe?


People just need to read the article. The author describes what he meant in the first paragraph.


Re: the embodiment/robotics argument, notice that he has a vested interest in this field:

> Several research labs are now trying to go further into acquiring grammar, gestures, and more complex cultural conventions using this approach, in particular the AI Lab that I founded at Aldebaran, the French robotics company—now part of the SoftBank Group.


> ...requires embodiment, that they have a physical body in space through which they connect to the world. I don't believe either of these are necessary requirements for AI.

This is very interesting. In the human realm, there is the celebrated case history of Helen Keller, who lost both her sight and hearing in early childhood. She would arguably "pass" whatever tests were administered for evaluating a strong AI.

However, walking backwards from that towards case histories where patients were born deaf and blind, they too could pass those kinds of tests. But add more sensory disabilities, and it becomes murkier. Has anyone performed a detailed survey of medical histories? Is there a point beyond which the lack of a specific set of senses leads to impaired sapience in 100% of documented human cases? Could that be a promising lead for building a virtual environment that simulates a minimum threshold of senses?


Also, AIs are not constrained by the senses they share with humans. Our information intake is mediated by only a few inputs: a stream of light as a matrix within a very narrow range of possible spectra; a stream of air pressure decoded into frequency series; touch; etc. AIs could have many more modes of input and output.

Also, it seems weird to consider having "human" inputs embodied but having direct access to information stored elsewhere as not; surely there are disk platters, SSDs, and other devices existing in the real world, no?


Absolutely correct: more senses are definitely available to an AGI. In the context of building an AGI whose evaluation criteria include "like a human", implicitly or directly, and the thesis that embodiment of the senses is a prerequisite for fulfilling those criteria, we don't know whether having more senses than a human might thwart the development of the first AGI. We currently believe that having more senses than "normal" tends to lead to neuro-atypical development like autism, so based upon current thinking there is some reason for caution against tossing in all the inputs we can imagine (I wonder whether that thinking is based upon empirical fMRI scans). We do have the corpus of data built up by humans, however, which says "no known cases in medical history of sapience failing to develop with this minimal set of senses".

My cursory search says hearing and sight are not necessary. Leaving us with smell, taste, and touch. Smell and taste are closely interlinked, so there is a possibility that a simulated smell-taste and touch are all that is required to "bootstrap" the first AGI, so to speak.

Your point calls to mind the scene in Battlestar Galactica (the reboot) where Cavil fulminates "I don't want to be human". [1] I suspect that once we bootstrap the first AGI, you are absolutely correct: we will rapidly add more senses to the AGI's repertoire. But we only have one data point to use as a template for building an AGI, ourselves, so I also suspect the first AGI will somewhat resemble us. If so, and if the embodiment principle is one of the right routes to take, then it makes sense to me to simulate what we empirically determine to be the minimum sense set, and then expand from there. Walk before we run, so to speak.

It wouldn't hurt for someone investigating the embodiment principle to build for as many senses as they can envision, though. Many eyes, and all that, for attacking the problem space. I'm just expressing an uninformed gut feeling here, as I am not in the AI research space.

[1] https://www.youtube.com/watch?v=pM3CptVZDYU


Is AlphaGo really what we want to classify as 'intelligence'? It is a calculating engine – one which outperforms humans, to be sure, but no more intelligent than a properly-wielded pocket calculator.


Yes – stuff like this is confusing 'intelligence' with 'sentience'.

Playing Go or chess would have been classified as activities clearly requiring intelligence in the recent past. We're just moving the goalposts whenever AI advances. At some point, it'll probably divide mankind into factions which either want or don't want to grant certain rights to AI (possibly obsoleted by the great AI uprising of 2025-11-20 6:45:23am to 2025-11-20 6:45:25am)


I don't think it is quite that simple.

The 'real world' is not nearly as nicely ordered and specified as a turn-based game is. So a computer will be ideally positioned to take advantage of its ability to:

- remember perfectly, forever

- execute algorithms at extreme speed

- in parallel

So you end up with a race that is comparable to a sparrow against a jet airplane. Technology can and does beat biology regularly in all sorts of domains.

Slowly but surely computers are making their way into the real world: their programmers - and by extension the systems - learn how to adapt to the messiness of interacting with life as we know it.

Just like you'd first get kids to play in a playground (the games) and then let them loose in the real world (self driving cars and other much harder tasks).

In the end, all of these amount to a slow but steady march towards human equivalence on a large range of tasks, and to surpassing humans on plenty of them.

The hard AI question is one where we ask ourselves whether or not there will one of these days be a system that is better than humans on essentially all tasks without exception, including learning, creativity and so on. Once you reach that level all bets are off, until then the slow-but-steady march continues.

The difference is evolution vs revolution, but we've already seen enough progress in limited AI domains that we could conclude we're already seeing a minimal revolution. Whether it will result in a major one is still an open question, it could easily be that progress will at some point plateau.


It uses intuition to play, which is how humans play, but it does it by considering way more positions than humans.


Not only does it use a form of intuition, but it's an intuition that it's developed itself, rather than just copying others. Professional Go players are learning from the creative strategies of AlphaGo and if that doesn't imply some level of weak intelligence then I'm not sure why anyone talks about a weak/strong AI divide to begin with.

Like Scott Aaronson said, "You can look at any of these examples -- Deep Blue, the Robbins conjecture, Google -- and say, that's not really AI. That's just massive search, helped along by clever programming. Now, this kind of talk drives AI researchers up a wall. They say: if you told someone in the sixties that in 30 years we'd be able to beat the world grandmaster at chess, and asked if that would count as AI, they'd say, of course it's AI! But now that we know how to do it, now it's no longer AI. Now it's just search."


>It uses intuition to play,

Not by the normal definition of intuition. It reasons about millions of moves.


Could you provide a citation for that figure? It's not one I've read but I don't necessarily disbelieve it. It is worth considering this perspective from one of the Deepmind Nature papers:

"During the match against Fan Hui, AlphaGo evaluated thousands of times fewer positions than Deep Blue did in its chess match against Kasparov; compensating by selecting those positions more intelligently, using the policy network, and evaluating them more precisely, using the value network—an approach that is perhaps closer to how humans play. Furthermore, while Deep Blue relied on a handcrafted evaluation function, the neural networks of AlphaGo are trained directly from gameplay purely through general-purpose supervised and reinforcement learning methods"


The policy network (intuition) is already high dan level, which is stronger than many strong amateurs. That's right, with zero reading it's already something like 5d level.

The other part that increases its strength is the positional evaluation through the value network. It's ALSO intuition based on which points are going to belong to which player.

Those two combined give it a high dan level, maybe around 8d.

Only the last component (Monte-Carlo tree search) is not quite as similar because it uses random game roll-outs to ALSO give an evaluation.

AFAIK, the value network's evaluation overestimates territories while MCTS underestimates them, so using both makes it stronger than professional level.

The levels I gave are for the version that played Lee Sedol. The version that went 60-0 on the Internet is clearly stronger and might already be professional level without MCTS.
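
To make that division of labour concrete, here is a toy Python sketch (not DeepMind's code: policy_prior, value_net, and fast_rollout are made-up stand-ins, and the real system keeps full MCTS tree statistics rather than scoring each move one-shot) of how a policy prior, a value estimate, and fast rollouts can be blended when scoring candidate moves:

    import random

    def policy_prior(state, moves):
        # Stand-in for the policy network: a probability for each candidate move.
        return {m: 1.0 / len(moves) for m in moves}

    def value_net(state):
        # Stand-in for the value network: estimated win probability of `state`.
        return random.random()

    def fast_rollout(state):
        # Stand-in for a fast rollout to the end of the game: 1.0 = win, 0.0 = loss.
        return random.choice([0.0, 1.0])

    def score_move(state, move, prior, n_rollouts=20, mix=0.5):
        next_state = state + (move,)      # toy "apply move"
        v = value_net(next_state)         # value-network estimate
        r = sum(fast_rollout(next_state) for _ in range(n_rollouts)) / n_rollouts
        # Blend the two evaluations and weight by the policy prior, mirroring the
        # "value net overestimates, rollouts underestimate" point above.
        return prior[move] * (mix * v + (1.0 - mix) * r)

    def choose_move(state, moves):
        prior = policy_prior(state, moves)
        return max(moves, key=lambda m: score_move(state, m, prior))

    print(choose_move(state=(), moves=["A", "B", "C"]))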


Those are two independent sentences which are not mutually exclusive.


Your intuition is probably based on millions of events that happened to you and your ancestors (parts of it developed through evolution and social constructs).


The equivalent on the machine's side would be millions of past games, either simulated or played by real humans, both of which such systems use.


> Then he makes the point that this requires embodiment, that they have a physical body in space

He makes an even stronger and more curious claim: that "This realization is often called the “embodiment problem” and most researchers in AI now agree that intelligence and embodiment are tightly coupled issues."

That makes it sound as if there is a consensus among researchers that strong AI requires a robotic body. Which I doubt.


So do I. But I'm mindful of the flawed assumptions made by the best minds in symbolic AI from the 1950's to the 1980's that AI required no more than raw facts and logic. Let's not repeat that mistake.

I'm intrigued by the potential of deep nets to 'simulate' the bases for symbolic grounding using something like thought vectors (TVs), especially if they can be revised and augmented through personal experience. Bootstrapping one's knowledgebase using someone else's rich set of TVs may go a long way for an AI to become grounded vicariously.


The first Chomskyan argument looks plausible (though still speculative) to me, but I share your skepticism of the ‘embodiment problem’ claim that a body and rich set of senses are necessary prerequisites. Anyone seduced by that argument should consider the life story of Helen Keller, who did have two-way interaction with the world, but only through the biological equivalent of a teletype.


>I share your skepticism of the ‘embodiment problem’ claim that a body and rich set of senses are necessary prerequisites. Anyone seduced by that argument should consider the life story of Helen Keller, who did have two-way interaction with the world, but only through the biological equivalent of a teletype.

Helen Keller had smell, taste, touch, and proprioception all functioning. In what way did she have no senses but a teletype?


I was just looking for an analogy suggesting low bandwidth and experientially lean.


I mean, yes, it's low bandwidth, but it's actually high dimensionality, which has information-theoretic advantages.


Low-bandwidth is something of an understatement, especially when we consider activities like language acquisition, where smell, taste and proprioception play little part. Are the advantages of the dimensionality of this case sufficient to rescue the author's assertion that the embodiment problem requires robotics for its solution? Note that he used vision in his examples.

It occurred to me later that one can argue that Helen Keller, like the rest of us, was born with a brain shaped by hundreds of millions of years of interacting with the environment. I do not doubt that interaction is probably an essential part of us getting to AI, but I doubt that physical embodiment is the only way to do it.


That and virtual environments are continuing to grow in richness.


Very disappointing article indeed.

AlphaGo is an AI project of historical relevance that had a well-specified and measurable goal: beating humans at a game that requires developing an intuition for moves rather than just reading moves in advance.

Having said that, I don't understand the point of trying to delegitimize the significance of AlphaGo. Implementing what we call "intuition" is not a minor thing.

In contrast, I encountered the Pepper robot in a mall in SF. It was almost impossible to interact with it, so if there is going to be a critique, why not start with one of his own projects?


I think we can agree to disagree on the definition of intelligence.

Intelligence implies a lot more than pattern recognition. The current state of the art is just a better way to handle large quantities of data. The smart part is to feed the computer good data in the right way, iterate through different ways the computer can infer a model from the data, and finally interpret the results. Computers don't know how to do that. Yet.


> The smart part is to feed the computer good data in the right way, iterate through different ways the computer can infer a model from the data, and finally interpret the results.

That's exactly how AlphaGo was trained.


Not very up to date are you...


So, basically the Chinese Room brouhaha all over again...


Just wait, next they'll discover the trolley problem and claim that self-driving vehicles will have to be philosophers.




> things everyone knows

Everyone in the field. There are many more people outside of the field though.


The piece really makes the only point worth making in the beginning:

> What is AI and what is not AI is, to some extent, a matter of definition.

It is by the general and widely used definition that AlphaGo is actually an AI. It just happens to solve one very specific problem very well, making it a weak AI.

> Again, the rapid advances of deep learning and the recent success of this kind of AI at games like Go are very good news because they could lead to lots of really useful applications in medical research, industry, environmental preservation, and many other areas. But this is only one part of the problem, as I’ve tried to show here. I don’t believe deep learning is the silver bullet that will get us to true AI, in the sense of a machine that is able to learn to live in the world, interact naturally with us, understand deeply the complexity of our emotions and cultural biases, and ultimately help us to make a better world.

The piece was written at a time when the world was much more interested in what AlphaGo was doing and once again people were perhaps getting too excited about AI, but it really doesn't hold much value today. It may have been good to extinguish some of the misguided enthusiasm people were exhibiting back then, but no one is even talking about AlphaGo at the moment and there is little to no value in discussing what it means for strong AI anymore.


Speaking as an AI researcher, almost nothing is "AI" research. In practice I feel most current AI research falls into two categories:

* Fuzzy problems -- image, sound, and free-text recognition, where there is no real "true answer".

* Problems too hard to solve in a reasonable time without heuristics -- SAT, scheduling, etc. In practice, NP-hard problems and further up the complexity hierarchy -- AlphaGo goes here.

Once we know how to do something reliably, it stops being AI and just becomes "an algorithm" :)


Agree. Feng-hsiung Hsu, the chief designer of Deep Blue, the chess machine that beat Kasparov, once gave a talk at my university. I didn't go to the talk, but in the abstract he stated clearly that he didn't think Deep Blue was AI -- it's just a search algorithm.

I don't understand why so many commenters here are so furious about the author's claim that AlphaGo is not AI. What qualifies as AI is really up to definition, and there doesn't seem to be any widely accepted one among AI researchers. The author of the IEEE article doesn't think AlphaGo matches his definition of AI. Other AI researchers may think otherwise. But that doesn't mean the author is trying to dismiss the achievement of the AlphaGo team.


>I don't understand why so many commenters here are so furious about the author's claim that AlphaGo is not AI.

I don't think they really are. I think they are annoyed because nobody (who knows the difference) ever claimed that AlphaGo was strong/true/general AI - and the article feels like it's swatting at strawmen.


> Once we know how to do something reliably, it stops being AI and just becomes "an algorithm" :)

This.

AI is a moving definition and always seems to be "What we can't do right now."


Is there any research into making a "generic" AI that can solve any problem without the researcher first having to know what that problem is? I.e., human-style learning.


Certainly some very clever people are trying it, and have been since the 60s at least, but as far as I am aware the progress is limited (but it is not my area of expertise).

I think the main problem is you end up needing a language to describe the problem, and that ends up limiting the problems that can be solved, or you have to explain so carefully what the problem is it feels like cheating.


As a researcher in multiagent systems, I know this problem very well. Which is, I believe, exactly the point the article makes. But it goes on to say that this might be possible to overcome through embodiment and developmental psychology. Not new arguments, for sure, but valid ones, and raised at the right time (a year ago), in the middle of a new wave of AI hype.


Machine learning algorithms can solve many problems without knowing what they really are, only given examples (sometimes even without labels when unsupervised learning applies).

We are still quite far from anything we could call "human-style learning" though, but definitely getting there (just look at all the recent publications with reinforcement learning and elaborate ways to use memory in neural nets).


I think the parent is referring less to a system that can construct a single model without an explicit schema, and more to the thing that'd be a few steps after that—the ability to:

1. dynamically notice world-features ("instrumental goal features") that seem to correlate with terminal reward signals;

2. build+train entirely new contextual sub-models in response, that "notice" features relevant to activating the instrumental-goal feature;

3. shape goal-planning in terms of exploiting sub-model features to activate instrumental goals, rather than attempting to achieve terminal preferences directly. (And maybe also in terms of discovering sense-data that is "surprising" to the N most-useful sub-models.)

In other words, the AI should be able to interact with reward-stimuli at least as well as Pavlov's dog.

Right now, ML research does include the concept of "general game-playing" agents—but AFAIK, these agents are only expected to ever play one game per instance of the agent, with the generality being in how the same algorithm can become good at different games when "born into" different environments.

Humans (most animals, really) can become good at far more than a single game, because biological minds seem to build contextual models, that communicate with—but don't interfere with—the functioning of the terminal-preference-trained model.

So: is anyone trying to build an AI that can 1. learn that treats are tasty, and then 2. learn to play an unlimited number of games for treats, at least as well as a not-especially-smart dog?


Not entirely sure that is a fair description of human-style learning. Our overall problem to solve is 'survive and reproduce'. Anything else can be seen as just a sub-problem of that. Humans are taught by other humans how to solve problems from the day they are born. Our DNA passes on millions of generations' worth of learning from our ancestors about how to solve problems.


It is a fair description. That being: able to enumerate a large number of arbitrary goals and define a large number of basic pattern classifiers/feature extractors.

When people give credit to the human designers for AlphaGo's wins, saying that it is really a win for humanity, I disagree. The wins are AlphaGo's even if the design is of human ingenuity.

When you say that the outputs of human ingenuity should be credited to Evolution, I similarly disagree. You might as well credit Evolution for AlphaGo's win. While it is true that Evolution invented the first AGI (and is, in some though not all ways, a superior intelligence to it), it still makes sense to separate the products of human learning from whatever structural priors DNA passed along. I'll also point out that compared to most animals, humans actually have weaker priors and spend a lot of their early days learning to learn.


Tangential quibble: For anything that could be called "human-style" learning, which presumably requires abstract intelligence and cultural transmission, we're probably looking at something that has existed only during the period of human behavioral modernity - commonly taken to date from roughly 40-50,000 years ago. Assuming a generation is ~20 years (and there are some arguments that the average generation might have been closer to 25 years), that's only 2500 generations.

I think it's fascinating that we have developed from un-self-reflective animals to abstract thinkers on the verge of creating wholly new abstract-thinking entities from scratch, in only two and a half thousand steps. Especially given that the majority of the technical knowledge necessary was developed only in the last 500 years, or 25 generations.


Humans need just a bit more supervision than that.


> It is by the general and widely used definition that AlphaGo is actually an AI.

Only for a little while. AI is how we describe new technology that does something we think of as requiring human intelligence. AI, like the word "technology" itself, is a term we use during the lag between creating something and learning to take it for granted.

I don't foresee an end to technological development, but I do think eventually there will be no AI, because our ability to intuitively appreciate intelligence doesn't much surpass our own. Beyond some point we will stop perceiving new technological capabilities as higher levels of intelligence. There will be people, computers, appliances, services not tied to individual machines, and genetically engineered rabbit tutors for our children, and nobody will be surprised to encounter intelligence anywhere, least of all in a computer.


When some people say "AI" what they really mean is "Strong AI"


I think people think of it more along the lines of an artificial consciousness.

Which, since consciousness is likely an emergent property of our own biological neural net and its training data/weights, would likely have issues similar to those humans have when exposed to bad data while growing up.


> our own biological neural net and training data/weights

Well that shoots down my "minds are made of mechanical internal drives/pistons/levers" theory.


I'm some people and what I mean by AI is artificial intelligence. If needed I can be more precise with Weak / Strong / AGI or ASI.


So now you speak for everyone, and define their language usage independent of their intended connotations? Quite presumptuous.


You are some other people, then.


What's a "weak AI"? Does any algorithm that is able to adapt its behavior by learning qualify as "weak AI"? If so, why call it AI at all?


"the capability of a machine to imitate intelligent human behavior"

That's what I would define as weak AI. So yes, even algorithms which do not adapt their behaviors by learning, but can imitate intelligent human behavior to some extent would be weak AI, such as AI in video games.

Now strong AI would be: "the capability of a machine to originate intelligent human behavior"

This means the intelligent behavior would originate from the machine itself. Some machine learning algorithms exhibit strong AI in this sense, in that they find ways to even outperform a human, by themselves, through the learning process. Sometimes we can't even understand why one chose to do what it did, but it appeared to be even more intelligent than what most humans would have chosen to do themselves.

Now there is Artificial General Intelligence, sometimes also called strong AI, but I think there needs to be a distinction, as I explained before. This would be: "the capability of a machine to originate any intelligent human behavior"

The reason a lot of people are talking about true AI, AGI, strong AI, etc. today is because machine learning is seen as the be-all, end-all of AI. Yet practitioners and scientists know that it is not, at least in its current form. Currently, it can solve simple decision tasks, probably the kind of decisions a person makes in under a second or two, like driving. But it does not solve them generally, so a person must train a different model for each task.

At the same time, having a way to automate the decision making of a single simple task is huge in terms of potential. There are so many things we can automate through this, and if we combine them, you can create even more complex autonomous machines. That's why people are excited. Though I understand that it is not as awesome as having found general AI, or AI which can make complex decisions. The dream of AI continues, but the practicality of it is closer than ever.


> So yes, even algorithms which do not adapt their behaviors by learning, but can imitate intelligent human behavior to some extent would be weak AI

Replacing various human thought capabilities has been the goal of computers and algorithms since Turing and von Neumann.

> in that they find ways to even outperform a human, by themselves, through the learning process

Computers have been outperforming humans in many tasks since the moment of their creation. Some animals learn some things better than humans. Are they more intelligent?

> Machine Learning is seen as the be all, end all of AI.

I think it is seen as that mostly by non-experts, and generally by those who just want to believe we're close.

> At the same time, having a way to automate decision making of a single simple task is huge in terms of potential. There's so many things we can automate through this, and if we combine them, you can create even more complex autonomous machines.

Sure, I just don't see what any of this has to do with intelligence. Bacteria are autonomous, and so is the weather. The universe, too, for that matter. None of those is particularly intelligent.


Yeah, exactly. We call it AI because we need a name for this kind of adaptive algorithm, even if it isn't anything close to strong AI


There are JIT compilers, and even sorting algorithms that are adaptive. Are they AI too? What's wrong with the name "statistical learning" (as that's the flavor du jour of so called AI anyway)?


I think the word "adaptive" is meant to mean "general" learning that can be applied to more behavior.

AlphaGo was trained for Go; could it be given chess and learn it just as well? The answer is no.

If I had to personally define AI, I would give a machine input from cameras and microphones (eyes and ears) and output to the limbs of a robot. And I would see whether, given sufficient years, it can learn language and conversation and the ability to use the robotic body for sports.


No, that is totally ignoring the point of the article due to its semantics.

The point the article makes is that a general intelligence (or whatever you want to call something that has intelligent function like a human) is not attained by AlphaGo. It might help, but it is only part of the puzzle. There is work needed in other directions.


One of the co-founders of DeepMind (Shane Legg) has provided the following definition of intelligence: "Intelligence measures an agent’s ability to achieve goals in a wide range of environments" in [1]. This definition has been pretty influential, and in the sense of this definition, AlphaGo is not AI. But it is a great step towards AI.

[1] S. Legg, M. Hutter, Universal Intelligence: A Definition of Machine Intelligence. https://arxiv.org/abs/0712.3329
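
For reference, the paper makes that informal definition precise by (if I'm transcribing it correctly) weighting an agent's expected performance in every computable environment by the environment's simplicity:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where \pi is the agent, E the set of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_\mu^\pi the expected reward \pi earns in \mu. An agent specialized to a single environment like Go picks up almost nothing from all the others, which is the sense in which AlphaGo falls short of this definition.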


This definition has the (unintentional?) effect of defining-away human intelligence. By the time a machine can demonstrate "enough" achievement in different domains to be called intelligent, the same high bar would disqualify any actual human being. If "a machine has to master backgammon, chess and poker, not just be a world champion at one of them" is a prerequisite for intelligence, then I don't think that any one human can demonstrate intelligence either.

Consider AI as a newly discovered species. If you were trying to discern if a previously unknown cetacean were intelligent, or if life discovered on a distant planet were intelligent, would you only say "intelligence discovered!" after it equaled-or-surpassed human performance on many or most kinds of thinking historically valued by humans? I wouldn't. I think that AI is already here and that the people waiting for artificial general intelligence will keep raising the bar and shifting the goalposts long after "boring" narrow AI has economically out-competed half the human population.


I think the quote does a great job of explaining why a lot of people have been critical of labeling the recent breakthroughs in machine learning as real intelligence. Most people define intelligence in comparison to humans. Things like being the best Go player in the world are so specialized that they don't seem very human at all.

Most people will not be impressed by a machine that can master backgammon, chess and poker, despite it being a great technical feat. They would be impressed by one that can successfully teach a 5th grade math class, even though there are hundreds of thousands of people who can do this.

This would require more than teaching the kids math: also how to deal with the kid who loses a parent during the school year, how to deal with bullying in the class, with misbehaving students. None of it is "specialized knowledge" like playing Go. And we are nowhere even remotely close to this.


I used to think that the rarity of human mastery of games, and games' abstraction from the physical world, were what prevented most humans from perceiving machine intelligence as intelligence. Then the 2005 DARPA Grand Challenge for self-driving vehicles showed that machines could perform a task that most adult Americans can perform, that no non-human animals have ever been taught to perform, and that requires significant awareness of the physical world. But AFAICT it didn't cause a sea change in how most people think about intelligence, human and otherwise.

There has been an uptick in people pondering the economic implications of driverless vehicles and a more robotic future. That discussion seems kind of oddly isolated from re-considering the nature of intelligence, human and otherwise. It's as if after the Industrial Revolution people kept narrowly scoping the meaning of "power" to "muscle power" rather than acknowledging mechanical forms. Oh, yes, that coal fired pump can remove water from the mine faster than I can... but it just uses clever tricks for faking power.


> a task that most adult Americans can perform, that no non-human animals have ever been taught to perform

Woah wait what? Non-human animals successfully navigate >7 miles of mountain terrain all the time.

Machine intelligence doesn't seem like "real" intelligence because it just doesn't seem as generalizable. Taking the engines and hydraulics used to great effect in water pumps and applying them to construction cranes required engineering work, sure, but no new physics. But you can't just take the convolutional neural nets that are breaking new ground in computer vision and apply them to natural language processing, you need new computer science research to develop long short-term memory networks.

The cool thing about AlphaGo, from my understanding, was that it was able to train the deep learning-based heuristics for board evaluation by playing a ton of games against itself. This is especially awesome because those heuristics are (were?) our main edge over machines [1]. But in CV and NLP, playing against yourself isn't really a thing, so again, this work doesn't automatically generalize the way engines and hydraulics did.

[1]: https://en.wikipedia.org/wiki/Anti-computer_tactics


In a sense, defining away human intelligence is the whole point. We have no serviceable definition of intelligence other than a behaviourist black box embodied by humans. From the perspective of AI research, any progress will chip away bits from the intelligence black box, and still the resulting AI will be out of reach of "true" intelligence. Since we have little knowledge of the internal structure of "intelligence" as a phenomenon, we can't tell how much is left to discover about it.

From the other end, the merits of production AI will be measured against isolated properties pulled out of the intelligence black box, and since intelligence remains undefined (in a strict ontological sense), debate ensues.


>We have no serviceable definition of intelligence other than a behaviourist black box embodied by humans.

I believe cognitive scientists have a considerably better idea than that.


Yes they each have a better idea, but the intersection of their proposals is pretty much just that. Why do you think they still make a big fuss over the Turing Test?


I have never heard cognitive scientists make a big fuss over the Turing Test. Who've you been reading/watching?


Good old Doug makes a little fuss here: https://books.google.no/books?id=qa85DgAAQBAJ&lpg=PT428&ots=...

When I said "make a big fuss" i didn't mean "herald it as the gold standard", but they do go on about it even though they might disagree about what it utlimately signifies.


> Intelligence measures an agent’s ability to achieve goals in a wide range of environments

A tardigrade achieves its goals in a wide range of environments; I don't think anyone would call it intelligent. I've known quite a few 15-year-old juvenile delinquents that I'm certain were able to achieve their goals in a much wider range of environments than Albert Einstein; were they all much more intelligent than Einstein?

> But it is a great step towards AI.

How do you know how big a step it is? None of our learning algorithms has yet achieved even an insect's level of intelligence (which would normally be considered zero or close to it). How do you know we're even on the right path? I mean, I have no real reason to doubt that we'll get AI sometime in the future, but the belief -- certainty even -- that AI is imminent has been with us for about sixty years now.


I would absolutely classify a tardigrade as an intelligent system. Systems of parts interacting in an intelligent way applies to more than just networks of nerve cells.


You've made a circular definition. And I wasn't claiming that only nerve cells can form intelligence, but I think that if we expand the definition beyond what people normally call intelligent, then we should be more precise than the vague "ability to achieve goals in a wide range of environments" or else every complex adaptive system would be called intelligent, in which case we can drop that name because we already have one: complex adaptive systems (https://en.wikipedia.org/wiki/Complex_adaptive_system)


I wasn't trying to give a definition. Just refuting your claim that nobody would consider a tardigrade intelligent.


But if a tardigrade is intelligent then so are bacteria and even viruses. While you can define intelligence how you like, I don't think this coincides with common usage. In fact, you've extended the definition so much that it just means "life" or even a complex adaptive system (https://en.wikipedia.org/wiki/Complex_adaptive_system). If by intelligent you mean "alive" or "complex adaptive system", why use the word intelligent?

Also, if intelligence means "achieving goals in a wide range of environments", then I think some computer viruses are far more intelligent -- by this definition -- than even the most advanced machine-learning software to date.


In my view, part of the fundamental problem of current AI approaches for tackling general intelligence is the inability to collate and organize different systems and algorithms together in a coherent manner.

To use a flawed but useful mode of thinking: we've succeeded at codifying crystallized-intelligence systems (far beyond human means), we've made some progress on fluid-intelligence systems (at worst random search), but we have not managed to make any progress on the mediator of the two. We can build a million different single-purpose neural networks, but we have very few meta-classifiers for knowing which network to use at which time (ones that can also track and steer their own self-modification). Watson is possibly the closest we have at this time, but it seems to work on static information which it cannot update or improve on itself.


I don't understand the criticism this article receives here. It makes a very good counterpoint to all the hype the AI field is experiencing at the moment, which leads the general audience to believe that algorithms will replace humans quite soon.

You may think it doesn't matter, but we have candidates in the presidential election here in France building economic programs that assume "robots will do everything soon, so let's provide everybody with a universal income" as a likely immediate future.

It is extremely healthy to let people understand that all we've been able to build so far are number-crunching machines, which absolutely don't give any kind of meaning to the numbers they see passing by. Meaning as in: I see what this number represents, and the consequences it has for real people.


If the mind is what a physical brain does, then a physical brain can in principle be simulated - by crunching numbers - to produce a mind (some have claimed that there must be more to it than that, but none have come up with a generally accepted justification for their position).


I don't think the mainstream of pragmatic machine learning/AI (deep neural networks, reinforcement learning, etc.) is really working toward that goal. However, there are some who are trying to create a "software simulation of the biological brain", Numenta being one. I think the limiting factor there is current instrumentation for use in neurological study.


That's another discussion. The algorithms we're building right now aren't simulators of consciousness or thought or anything usually attributed to intelligence. They're not even trying to be, and the theoretical foundations they're built on aren't theories of consciousness, but pure statistics and signal processing.


True, but it does not follow from them being number-crunching machines; that is not a useful characterization in this context.


> all we've been able to build so far are number crunching machines, which absolutely don't give any kind of meaning to the numbers they see passing by

All humans are just protein factories, which don't have any meaning other than electrical pulses and chemical reactions.


Apart from the fact that the OP was talking about giving meaning (Chinese room, anyone?), not having meaning, I think that too is up for debate.

For sure there's no clear, agreed, universal meaning, and most likely it does not have a meaning to you. Not everyone thinks like that, though.

edit: [OT] "have a meaning to you" or "for you"? I have the impression that there's a slightly different interpretation, and one might be offensive (not intended). Or are they the same?


This is how I see it:

If the machine comes up with the decision making logic, it's AI, if it does not, it's not AI.

A sorting algorithm, in most cases, is a description of the logic to decide how to proceed at each step to accomplish the task of sorting. The computer merely executes that logic for us very very quickly.

On the other hand, self-driving machine learning algorithms come up with the logic of how to decide when to steer left or right, to what degree, and when to accelerate and decelerate, to what degree, all by themselves. Often we can't even formalize the logic they use; we can only test how well it performs. This is AI. The computer originated the intelligence, it did not simply execute it.

Within that, there is a spectrum which even humans exhibit. I cannot come up with the logic to solve all tasks. For most tasks, I would need a good teacher and a lot of practice before I could find, on my own, logic that solves the task even slightly, depending on the difficulty of the task at hand.

I'd say there's only one exception to this rule. Sometimes it is useful to call something that merely appears intelligent AI. In most video games, the computer is actually simply executing the logic a programmer gave it, but most players will believe it is coming up with that logic on its own. Most often it did not: a programmer came up with it, but you're fooled into thinking otherwise, and the computer appears sentient. This is not true intelligence, but being able to use logic to make decisions, even if that logic did not originate from you, is probably a part of intelligence. In a sense, if I could program a machine to perform all tasks, even though it uses the logic that I came up with, that machine would appear quite useful. It would only fail when encountering a task it has never seen, or a condition in a task that's not accounted for in my logic. Machine learning would not fail at those, in that it could re-learn from the new condition and come up with a different way to solve the task.
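
A contrived little Python illustration of the distinction (the data and threshold rule here are made up): in the first function the programmer specifies the decision logic, in the second the machine derives its own decision boundary from examples and the programmer never states the rule.

    def programmer_logic(xs):
        # Sorting: every decision about "which element comes next" was specified
        # by a human; the machine only executes it quickly.
        return sorted(xs)

    def learned_logic(examples):
        # A 1-D classifier: the machine picks its own threshold from labelled
        # examples (x, label); the rule originates from the data, not the coder.
        positives = [x for x, label in examples if label == 1]
        negatives = [x for x, label in examples if label == 0]
        return (min(positives) + max(negatives)) / 2.0

    print(programmer_logic([3, 1, 2]))                     # [1, 2, 3]
    threshold = learned_logic([(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)])
    print("learned rule: predict 1 when x >", threshold)   # 0.5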


Relevant Hofstadter quote:

> Sometimes it seems as though each new step towards AI, rather than producing something which everyone agrees is real intelligence, merely reveals what real intelligence is not.


I really don't understand all this pedantic squabbling over terminology. Nobody has suggested that AlphaGo is AGI or human-level intelligence. But it is intelligent. Just like in humans or any other organism, intelligence is a spectrum. Artificial intelligence is no different. I think even the layman understands that AlphaGo is not a sentient being inside a computer, just as people understood the same about Deep Blue.


> intelligence is a spectrum

To me, saying algorithms are on the intelligence spectrum is like saying the weather report is on the humour spectrum. It's true but ultimately meaningless, because neither algorithms nor the weather report exhibit the interesting properties of those spectrums.


OK, so we have AIs that are capable of solving a very specific problem. My question is: are AIs capable of being composed? In the sense that if I have 1000 AIs that are each experts at performing their domain-specific tasks, can one or more AIs be built on top of them that specialize almost exclusively in delegation? I.e., they delegate intelligently depending on the task at hand. If that's possible, then we could in theory build a generalized AI, right?


A simple decision tree could suffice. Is it a speech problem, is it an image problem? Is it a classification problem? Failing to find an acceptable answer, it does what seems closest or makes a random but plausible decision. This doesn't seem too far from what humans would be doing. Part of the problem is the meta-recognition of the problem and how to organize that within an actual system.
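
A toy Python sketch of that routing idea (the experts and the keyword matcher are placeholders; as noted, the meta-recognition step is the hard part):

    import random

    EXPERTS = {
        "speech": lambda task: "speech expert handles: " + task,
        "image":  lambda task: "image expert handles: " + task,
        "text":   lambda task: "text expert handles: " + task,
    }

    def classify_problem(task):
        # Placeholder for the hard part: recognizing what kind of problem this is.
        for kind in EXPERTS:
            if kind in task.lower():
                return kind
        return None

    def delegate(task):
        kind = classify_problem(task)
        if kind is None:
            kind = random.choice(list(EXPERTS))   # random but plausible fallback
        return EXPERTS[kind](task)

    print(delegate("transcribe this speech recording"))
    print(delegate("something it has never seen"))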


It's not good to abuse analogies, but the human brain has to spend more time on tasks with conflicting information, for example reading the word "yellow" when it's written in blue ink. More details: https://en.wikipedia.org/wiki/Stroop_effect


How old is the debate about what is really AI? A program called _Chess_, written in the 1970s by Larry Atkin and David Slate at Northwestern, did manage to win one game from David Levy. See https://en.wikipedia.org/wiki/Chess_(Northwestern_University...

Atkin often complained about the characterization of their program being AI, saying (paraphrasing here) "It isn't AI. It is just good software engineering."


If a chimp could play the game of Go at the same level as humans, would we say the chimp had intelligence? When you base your definition of intelligence on a sample size of 1 (humans), but can't even define it within that context, you get these kinds of problems. Is a platypus a mammal or not? Does the classification change the function? Are all humans intelligent, or do they all possess natural intelligence? Is there a degree of intelligence, i.e. are some humans in possession of more intelligence than others?

I look at artificial intelligence as an umbrella term whose meaning varies with context. If you classify a task as needing intelligence to solve and someone finds a piece of code that does it, you have an artificial intelligence there. You can keep moving the goalposts, but it's only nuance and definition.


There are problems where humans need to use intelligence but which are amenable to brute-force computing - chess is an example - and that does not, in my view, make brute-force computing qualify as intelligence. It does not accord with what the term AI originally meant, and the first moving of the goalposts was to apply it to these cases.


Defining AI is hard - we can agree on that. I think the best definitions are in the form of tests that match performance against organisms that have developed intelligence through millions of years of evolution. The Turing Test is a good example of this. I think we can all agree that an AI that passes the Turing Test is intelligent. But that is the final goal post. What about intermediate milestones? My opinion is that games just don't count.

I think that creating similar tests as milestones would do more to advance the AI field. Like the Turing Test, each must test that parity with another intelligent being is achieved. Intelligence is a spectrum from "not very" to "omniscient". What are some organisms closer to the "not very" end of this spectrum that would be good candidates for parity tests? How would a parity test be performed?

It is in answering that question that one arrives at “There is no AI without robotics”. If you want to demonstrate that your artificial cockroach is as intelligent as the real thing, then you have to build a robot. It doesn't have to be a physical robot as long as you have a simulator that can do a very accurate simulation of the physics of the environment, the AI candidate, and their interactions. That's not so easy, which is why many if not most researchers build physical robots.

Would a cockroach be a good first milestone? Too complex? How about a worm? Achieved parity with cockroach? Then move on to Jumping Spiders. When your great-grandchildren achieve that parity, my great-grandchildren will certainly celebrate with them.


IMO there are only 2 problems in machine learning (ML):

1. Regression - Given a set of vector pairs Xi and Yi, find a function that maps each Xi to Yi, minimizing some objective function.

2. Path Finding - Given some topology (typically on some regression output), find the "best" path, given some objective function.

All other problems (classification, clustering, etc) can be reposed as a combination of these two.
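As a small illustration of that "everything reposes as regression" claim, here's a toy Python sketch that treats a two-class classification problem as least-squares regression onto one-hot indicator targets and then classifies by argmax. The data and setup are invented for the example; it's meant to show the reposing, not to endorse it as a good classifier.

    # Reposing classification as regression: fit a linear least-squares map
    # from inputs X to one-hot targets, then classify via argmax of the output.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-class data: two Gaussian blobs in 2D.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    # One-hot indicator targets and a bias column.
    Y = np.eye(2)[y]                        # shape (100, 2)
    Xb = np.hstack([X, np.ones((100, 1))])  # shape (100, 3)

    # Ordinary least squares: minimize ||Xb @ W - Y||^2.
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

    # "Classify" by regressing and taking the argmax over the two outputs.
    pred = np.argmax(Xb @ W, axis=1)
    print("training accuracy:", (pred == y).mean())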

Viewed in this way, an "AI" system set on any reasonable test of intelligence would score super-human (defined against the average adult human) with the current set of algos, with one caveat.

If we think of operating systems in, say, the 90s, we would mostly agree that the changes since then have been evolutions rather than revolutions when compared to the OSs we have today. It's the development, integration, testing, tools, hardware support, etc. that have taken all this time.

So, to the article's point, the AlphaGo system does these 2 things pretty well just as Windows95 did things pretty well. Yet even today I have to restart my computer every time there is the most trivial of updates.

TL;DR Super human Audio/Visual/NLP ML == AI is both here now and a long way off.

One could probably cut this down to one, but stating it would be very convoluted. Also, "pathing" problems are typically non-convex and real-time, whereas regression problems are typically not.


One problem with defining what is and isn't AI in the current age is that as something approaching "real" AI - rather than Hollywood AI - enters the public consciousness, it is seized upon by marketers wanting to attach the new buzzword to their products, effectively diluting the term in the process.

Things that a decade ago would have been described as performing regression analysis, and just last year doing machine learning, may be described this year as being powered by AI.


Margaret Boden made these points previously. http://www.sussex.ac.uk/profiles/276

A fun alternative strand in AI research driven by these thoughts (I think) is typified by Rosie. Rosie is the proper job.

http://soar.eecs.umich.edu/articles/articles/videos-rosie

There is an unseemly comment in this thread that alludes to a genuine issue with the embodied-AI argument. People with other bodies are not lesser intelligences than those who have a socially cast "typical" body or sensors, and given that, one has to question this line of reasoning. The answer is that intelligence is embodied outside and beyond your physical being - it's your body, your family, your tribe, and so on. I find the philosophy difficult, but to see that there is something in this view you need to hike out to a wood somewhere and spend a couple of weeks by yourself. Things happen and who you are changes (it did for me). Cut yourself off, or be cut off, and you are about as much less human (less of an autonomous cognitive agent) as one can become.


While I accept AlphaGo as 'real AI' (I watched the match with Lee in realtime, and years ago I wrote and marketed my Go playing program 'Honninbo Warrior'), I agree that AlphaGo is not a general-purpose AI.

Deep learning has been personally career/interests changing, but I still think that we should be building hybrid neural network / symbolic AI systems.


This whole post can be (unintentionally) TL;DRed by a single Wikipedia article: https://en.m.wikipedia.org/wiki/AI_effect

The phenomenon of people discounting significant advances as "not AI" is not new. I'm surprised it hasn't gotten old yet.


> The phenomenon of people discounting significant advances as "not AI" is not new. I'm surprised it hasn't gotten old yet.

It hasn't gotten old because people keep calling non-AI AI. I have so far refrained from correcting people because I see it like the word "hacker", which on Hacker News is widely understood in its correct meaning, but is commonly used to refer to computer criminals. I've given up long ago on correcting people to use the word "crackers" rather than "hackers". But since it has been brought up...

Artificial Intelligence refers to something that behaves intelligently and can adapt to new situations. I don't think human brains are magic or divine (I was surprised to learn this is not a universal opinion), so in that sense we're all "just a computation". In that sense the simplest pocket calculator is a thinking machine, since it can take an almost infinite number of inputs and respond to them logically. But everyone with (artificial) intelligence understands that this is not what is meant by artificial intelligence.

The article that you link claims that every major advance is referred to as "just a computation", but I doubt that people will claim AI is "just a computation" once a computer learns to speak and can have a real conversation at the level of a 3 year old or something.


The term AI researchers have used to describe what you are talking about is Artificial General Intelligence. And they've made that distinction because there is no reason an advancement over human capabilities shouldn't be called AI. Sure, learning how to play Go better than any human alive doesn't mean that AlphaGo can learn and interpret languages, but that doesn't mean it's not intelligent. Even human intelligence is not homogeneous in its capabilities.

It's obviously speculative, but I'm of the belief that AGI will never exist the way fanciful minds imagine it. Rather AI will evolve slowly, and systems will incorporate small advances one by one until one day we don't really care to distinguish between AI and AGI. It will literally just be a collection of capabilities ("just a computation") that are well understood and have lost all semblance of magic.


like the word "hacker", which on Hacker News is widely understood in its correct meaning

You'd be surprised. Here's part of just such a conversation from a couple of years ago; I've always thought "hacker" without qualifiers meant "person who can code".

https://news.ycombinator.com/item?id=9790316

Other terms commonly used here with wildly different meanings being applied include "liberal", "feminism" and "capitalism". Screaming matches abound... :)


I was thinking the same while typing that (that due to being somewhat mainstream, "Hacker" News might not be understood for what it is). I personally use the catb.org definition[1], which is quite broad and lists 8 definitions -- if you count the 'deprecated' use of referring to 'computer criminal', which I do count as one of the popular definitions.

[1] http://catb.org/jargon/html/H/hacker.html


If there's an "AI effect" surely its inverse also exists; it seems to me that every time there is a [relatively] major advancement, a similar chorus of supporters pop up to tell us how we're so close to solving every AI problem imaginable. In recent memory there's Google's NMT, which had people effectively saying it's "solved" translation, but when I got to actually feed something into it, as a competent speaker of Japanese and former translator, I can only say that I was wholly unimpressed, relative to the hype at least.

Discounting AI advances is nothing new, but upselling them is at least as old, as well.


I completely agree. Deep learning has been doing this most recently and I'd argue that the very existence of AI winters is really just deflated expectations that always were overly ambitious.

My own way to look at AI winters is as triumph of Scruffy pragmatism over Neat delusions. Intelligence is unimaginably complicated, there is no reason to believe that silver bullet algorithms even exist, let alone the probability that the latest fad is it.



Regarding the definition of AI, I find the lecture notes for intro-to-AI courses at various universities [1] very useful:

> Views of AI fall into four categories:

> Thinking humanly - Cognitive Modeling

> Thinking rationally - “Laws of Thought”

> Acting humanly - Turing Test

> Acting rationally - Rational Agent

> The textbook advocates "acting rationally"

Indeed, most of what we see today in the AI community is solving the problem of "acting rationally", whereas the other categories have their respective fields (Cognitive Science, Cognitive Neuroscience, Philosophy) that are no longer closely related to computer science.

[1] http://homepage.cs.uiowa.edu/~hzhang/c145/notes/m1-intro-6p....


I would say that if your paradigm for studying thought cannot explain away at least two of the other three, you've got it wrong. A good theory in cognitive modeling ought to have applications to all the rest. A good theory of acting rationally ought to have applications to acting humanly or thinking rationally, and ideally to thinking humanly as well.


AlphaGo is not AI in exactly the same way that Aldebaran robots are not robots. Seriously, what do those robots do? They are just one step above Disneyland's pre-programmed animatronics. If this dude wants me to take him seriously, he needs to show me a robot that solves a problem for somebody. Nao is just a waste of perfectly good servos. Or, alternatively, if he insists that Nao is a robot, then I think AlphaGo can be considered AI.

People have been saying "___ is not AI" since Claude Shannon built a robot mouse to solve a maze. Then Shannon's chess playing program was "not AI". Humbug.


> People have been saying "___ is not AI" since Claude Shannon built a robot mouse to solve a maze. Then Shannon's chess playing program was "not AI". Humbug.

AI researchers say this because they don't want the hype to lead investors to put money in the wrong places, which could cause another AI winter.

OpenAI is one example. Fortunately, that's a non-profit. If it were profit seeking and they had a large investment that went belly up, it might cause the industry to tank.

That bust will probably happen again in the future. The "scrooges" are just trying to forestall it.


> Culture is the essential catalyst of intelligence

I liked the article on many fronts and thought it tried to justify most of its claims. However, the quoted claim goes unsupported.

Does anyone know of justifications for the quoted claim?


It would be beneficial to take a pragmatic approach. AI or not, as long as the end product improves the human condition, we just use it. Similar debates have happened so many times in the past; there is really no point in diverting attention. For the people who know, it is clear the title is clickbait; the people who don't know find that out anyway after they click.


It's more like AlphaGo is a PIECE of strong AI. Deep learning, as it's implemented today, is probably not the secret sauce of consciousness, but it is a powerful and useful subsystem. In that sense, it is a tangible step towards the eventual goal of general strong AI.


There's an old AI Winter saying: "If it works, it's not AI" ;)



are "we" the result of hardcoded DNA or the result of learning? Our objective of survival is kind of hardcoded objective we optimize our life on. Why do we care about if we can build another "intelligence" similar to ours? Why not just build something good at jobs we don't want to do, and leave "culture" to us. I strongly believe a higher being will laugh to see we call ourself intelligent the same way we see artificial intelligence.


This is a pointless argument; we all know AI is all about thought. If a person can't move, do they still have intelligence?


Whilst I don't like the author's way of phrasing interaction with real-world information as "fundamentally tied to robotics", humans whose range of movement is restricted to scrunching up one cheek have won Nobel prizes. Even with their physiological limitations, they have still had access to, and assimilated, information vastly more complex and unstructured than board positions for the game of Go, and used it to devise their own considerably more complex and varied end goals and means of communication to reach them. Whilst a computer which can choose among hundreds of legal Go positions at each turn might actually have more output channels than Stephen Hawking, it hasn't had and doesn't have the range of inputs to be in a position to contemplate alternative end goals such as dictating books on theoretical physics or ordering a cheeseburger (let alone the ability to learn how to be understood by sympathetic non-computers well enough to overcome its physical limitations).

One can turn it around: if an organism has no means of assessing or communicating with the outside world bar the ability to send and receive electrical signals so simple they can be mapped to the game of Go and whose survival to the next generation depends on electrical signals combining in various rare patterns hardcoded into its DNA (which could be mapped to the win conditions for the game of Go), would we even consider the possibility of such an organism being intelligent simply because its route to achieving those patterns was incomprehensible to and could not be deactivated by counter-stimuli from the humans that studied it? Would such a theoretical organism even need to be composed of more than one cell?


Title: "Why AlphaGo Is Not AI"

Actual argument: "Is [AlphaGo] going to get us to full AI, in the sense of an artificial general intelligence, or AGI, machine? Not quite"

Conclusion: "the rapid advances of deep learning and the recent success of this kind of AI at games like Go"

So not only does the article discuss something different from the headline, it directly contradicts the headline by calling AlphaGo a type of AI.

Bad journalism.


In a hundred years, google is going to be arguing with Facebook about whether humans are intelligent.


> AlphaGo Is Not AI

And this questioning article is not real writing. It fails to meet the standards of "real writing" that I uphold. A real article doesn't come with a clickbait title, and is better documented. I don't think the author knows what he's talking about; he's just expressing incredulity based on his gut feelings.


AlphaGo is an AI. The author of this article seems to expect AI to understand itself, code itself, have sex, and reproduce. WTF? AI is technology, not biology. AI and the human brain are not the same. This is like saying AI should come into being by itself, without a human programming it.


Of course it's not really AI; it works.


I guess that makes AlphaGo an AIdiot savant?


BS. It is AI; it's just that this AI can only play Go.


Good to know that quadriplegics are not intelligent, much less sentient.


Please don't make a point this way on HN.


this is one of those clickbaiting articles:

- taylor swift is not really an artist

- paulo coehlo is not really a writer

- javascript is not really programming

- Trump is not really human


Offtopic, but why do people design websites this [1] way? The content isn't even a fourth of the page width, then the rest of the page is filled with random articles, newsletter signup and follow buttons.

The topic sounds interesting, but the presentation is incredibly off-putting.

1: http://i.imgur.com/Obxuj9A.jpg



