
AlphaGo Is Not AI - henrik_w
http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/why-alphago-is-not-ai
======
vonnik
The headline for this piece is clickbait. AlphaGo _is_ AI, it's just not
strong AI. No AI is strong AI for the moment, but that doesn't mean that the
AI we have is uninteresting or a dead end, as the author seems to think.

The author and his editors deliberately conflate the two terms in the
headline, and clarify after about a paragraph. Sure, AlphaGo is not strong AI.
And Abraham Lincoln is dead. And there are lots of other things everyone knows
that don't deserve to be in the news.

Jean-Christophe Baillie wrote this piece. He has done some work in AI and
computer vision, and he had the opportunity here to write a piece that
reflected that expertise. Instead he made a rather facile point that is
similar to saying "Graffiti isn't art." That argument won't lead anywhere
interesting.

He rehashes the Chomskyan argument about meaning: that AIs won't understand
meaning until they can connect representations to the world. Then he makes the
point that this requires embodiment, that they must have a physical body in
space through which they connect to the world. I don't believe either of these
is a necessary requirement for AI.

An AI that solves a lot of problems better than humans, and transfers that
learning across many problems with relative ease, is getting close to strong
AI. It doesn't really matter whether the machine constructs meaning to itself
or not. It doesn't have to be humanlike to be strong. Secondly, AIs can train
perfectly well in virtual environments. They don't have to be embodied as
robots to be considered strong. We can model the world with enough complexity
to give AIs problems that, if solved, might define them as strong.

~~~
unityByFreedom
It's not clickbait. You're being pedantic. It's clear the author meant true /
general / strong AI and was merely addressing the hype of some non-AI
researchers.

It's a response to people who think because of AlphaGo that we are on the cusp
of achieving true / strong / general AI.

> An AI that solves a lot of problems better than humans, and transfers that
> learning across many problems with relative ease, is getting close to strong
> AI.

We're not close. See Yann LeCun's statement on AI after AlphaGo [1] and the HN
response [2].

[1]
[https://www.facebook.com/yann.lecun/posts/10153426023477143](https://www.facebook.com/yann.lecun/posts/10153426023477143)

[2]
[https://news.ycombinator.com/item?id=11280744](https://news.ycombinator.com/item?id=11280744)

~~~
psyc
I have always found the idea that there's a hard line between strong and not-
strong AI extremely naive. The whole thing is a long gradient of progress.
Strong AI is going to creep up on most programmers like the proverbial boiling
frog, and internet forums will be full of people denying its existence
throughout that process.

~~~
unityByFreedom
> I have always found the idea that there's a hard line between strong and
> not-strong AI extremely naive

You probably haven't studied much or any AI. In my opinion, those who haven't
researched a subject don't understand the details and are likely to make more
inaccurate predictions than experts.

~~~
psyc
What a randomly self-serving assumption to make.

~~~
randcraw
I can't agree. If we can blur the difference between weak and strong AI, that
'transitional form' (missing link?) will be enormously important to the future
of AI (and mankind).

In the past 50 years, AI has seen hundreds of small successes in narrow tasks
that used to require humans. But none so far has shown the potential to scale
up, generalize, and serve tasks other than the narrow one for which it was
designed. Like IBM Watson, AlphaGo too is likely to be consigned to the AI
scrapheap in the sky.

BUT... the deep net technique used by AlphaGo shows more promise to solve the
remaining unsolved AI tasks than any AI method before it. Yes, we still don't
know DL's limits, like whether it can integrate one-shot learning, or build
and reuse a diverse knowledge base, or transfer specific methods to solve new,
more general problems. But as of right now, it's shown greater promise to
solve novel weak AI tasks than any past technique I've seen. The author
overlooks that potential deliberately and provocatively, and IMO, pointlessly.

Can DL scale up into strong AI too? I think the important thing here isn't
that the answer isn't obviously yes (as the author posits), but that the
answer isn't obviously no. And in the 50+ year quest for strong AI, that's a
first, at least for me.

------
59nadir
The piece really makes the only point worth making in the beginning:

> What is AI and what is not AI is, to some extent, a matter of definition.

It is by the general and widely used definition that AlphaGo is actually an
AI. It just happens to solve one very specific problem very well, making it a
weak AI.

> Again, the rapid advances of deep learning and the recent success of this
> kind of AI at games like Go are very good news because they could lead to
> lots of really useful applications in medical research, industry,
> environmental preservation, and many other areas. But this is only one part
> of the problem, as I’ve tried to show here. I don’t believe deep learning is
> the silver bullet that will get us to true AI, in the sense of a machine
> that is able to learn to live in the world, interact naturally with us,
> understand deeply the complexity of our emotions and cultural biases, and
> ultimately help us to make a better world.

The piece was written at a time when the world was much more interested in
what AlphaGo was doing and once again people were perhaps getting too excited
about AI, but it really doesn't hold much value today. It may have been good
to extinguish some of the misguided enthusiasm people were exhibiting back
then, but no one is even talking about AlphaGo at the moment and there is
little to no value in discussing what it means for strong AI anymore.

~~~
CJefferson
Speaking as an AI researcher, almost nothing is "AI" research. In practice I
feel most current AI research falls into two categories:

* Fuzzy problems -- image, sound, and free text recognition, where there is no real "true answer".

* Problems too hard to solve in a reasonable time without heuristics -- SAT, scheduling, etc. In practice, NP-hard problems and further up the complexity hierarchy -- AlphaGo goes here.

Once we know how to do something reliably, it stops being AI and just becomes
"an algorithm" :)

~~~
chii
is there any research into making a "generic" AI that can solve any problem
without the researcher first having to know what that problem is? i.e., human
style learning.

~~~
jononor
Not entirely sure that is a fair description of human-style learning. Our
overall problem to solve is 'survive and reproduce'. Anything else can be seen
as just a sub-problem of that. Humans are taught by other humans how to solve
problems from the day they are born. Our DNA passes on millions of
generations' worth of learning from our ancestors about how to solve problems.

~~~
Cybiote
It is a fair description. That being: able to enumerate a large number of
arbitrary goals and define a large number of basic pattern classifiers/feature
extractors.

When people give credit to the human designers for AlphaGo's wins, saying that
it is really a win for humanity, I disagree. The wins are AlphaGo's, even if
the design is of human ingenuity.

When you say that the outputs of human ingenuity should be credited to
evolution, I similarly disagree. You might as well credit evolution for
AlphaGo's win. While it is true that evolution invented the first AGI (and in
some, though not all, ways a superior intelligence to it), it still makes
sense to separate the products of human learning from whatever structural
priors DNA passed along. I'll also point out that, compared to most animals,
humans actually have weaker priors and spend a lot of their early days
learning to learn.

------
mafribe
One of the co-founders of DeepMind (Shane Legg) has provided the following
definition of intelligence in [1]: "Intelligence measures an agent’s ability
to achieve goals in a wide range of environments". This definition has been
pretty influential, and in the sense of this definition, AlphaGo is not AI.
But it is a great step _towards_ AI.

[1] S. Legg, M. Hutter, Universal Intelligence: A Definition of Machine
Intelligence.
[https://arxiv.org/abs/0712.3329](https://arxiv.org/abs/0712.3329)
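
For reference, the paper makes this precise. Here is my LaTeX transcription of
its universal intelligence measure (see [1] for the full definitions):

```latex
% Agent \pi's expected performance V_\mu^\pi in each computable environment
% \mu, weighted by the environment's simplicity 2^{-K(\mu)}, where K is
% Kolmogorov complexity and E is the set of computable environments.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

A "wide range of environments" thus means every computable environment at
once, with simpler environments weighted more heavily.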

~~~
philipkglass
This definition has the (unintentional?) effect of defining-away human
intelligence. By the time a machine can demonstrate "enough" achievement in
different domains to be called intelligent, the same high bar would disqualify
any actual human being. If "a machine has to master backgammon, chess _and_
poker, not just be a world champion at one of them" is a prerequisite for
intelligence, then I don't think that any one human can demonstrate
intelligence either.

Consider AI as a newly discovered species. If you were trying to discern if a
previously unknown cetacean were intelligent, or if life discovered on a
distant planet were intelligent, would you _only_ say "intelligence
discovered!" after it equaled-or-surpassed human performance on _many or most_
kinds of thinking historically valued by humans? I wouldn't. I think that AI
is already here and that the people waiting for artificial _general_
intelligence will keep raising the bar and shifting the goalposts long after
"boring" narrow AI has economically out-competed half the human population.

~~~
reckoner2
I think the quote does a great job of explaining why a lot of people have been
critical of labeling the recent breakthroughs in machine learning as real
intelligence. Most people define intelligence in comparison to humans. Things
like being the best Go player in the world are so specialized that they don't
seem very human at all.

Most people will not be impressed by a machine that can master backgammon,
chess, and poker, despite it being a great technical feat. They would be
impressed by one that can successfully teach a 5th grade math class, even
though there are hundreds of thousands of people who can do this.

This would require more than teaching the kids math: it would also mean
dealing with the kid who loses a parent during the school year, with bullying
in the class, with misbehaving students. None of it is "specialized knowledge"
like playing Go. And we are nowhere even remotely close to this.

~~~
philipkglass
I used to think that the rarity of human mastery of games, and games'
abstraction from the physical world, were what prevented most humans from
perceiving machine intelligence as intelligence. Then the 2005 DARPA Grand
Challenge for self-driving vehicles showed that machines could perform a task
that most adult Americans can perform, that no non-human animals have ever
been taught to perform, and that requires significant awareness of the
physical world. But AFAICT it didn't cause a sea change in how most people
think about intelligence, human and otherwise.

There has been an uptick in people pondering the economic implications of
driverless vehicles and a more robotic future. That discussion seems kind of
oddly isolated from re-considering the nature of intelligence, human and
otherwise. It's as if after the Industrial Revolution people kept narrowly
scoping the meaning of "power" to "muscle power" rather than acknowledging
mechanical forms. _Oh, yes, that coal fired pump can remove water from the
mine faster than I can... but it just uses clever tricks for faking power._

~~~
laughinghan
> a task that most adult Americans can perform, that no non-human animals have
> ever been taught to perform

Woah wait what? Non-human animals successfully navigate >7 miles of mountain
terrain _all the time_.

Machine intelligence doesn't seem like "real" intelligence because it just
doesn't seem as generalizable. Taking the engines and hydraulics used to great
effect in water pumps and applying them to construction cranes required
engineering work, sure, but no new physics. But you can't just take the
convolutional neural nets that are breaking new ground in computer vision and
apply them to natural language processing, you need new computer science
research to develop long short-term memory networks.

The cool thing about AlphaGo, from my understanding, was that it was able to
train the deep learning-based heuristics for board evaluation by playing a ton
of games against itself. This is especially awesome because those heuristics
are (were?) our main edge over machines [1]. But in CV and NLP, playing
against yourself isn't really a thing, so again, this work doesn't
automatically generalize the way engines and hydraulics did.

[1]: [https://en.wikipedia.org/wiki/Anti-
computer_tactics](https://en.wikipedia.org/wiki/Anti-computer_tactics)
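
To give a feel for the self-play idea on a game vastly simpler than Go, here
is a minimal sketch (entirely my own toy construction, not DeepMind's method):
one-pile Nim, where a plain value table stands in for AlphaGo's deep-network
board evaluation and is trained purely by playing against itself.

```python
import random

N = 21                                  # starting pile size
V = {s: 0.5 for s in range(N + 1)}      # learned value of a pile of s stones
ALPHA = 0.1                             # learning rate
EPSILON = 0.1                           # exploration rate

for _ in range(20000):
    s = N
    trajectory = []                     # states faced by the player to move
    while s > 0:
        moves = [m for m in (1, 2, 3) if m <= s]
        if random.random() < EPSILON:
            m = random.choice(moves)
        else:
            # leave the opponent the position our table rates worst for them
            m = min(moves, key=lambda m: V[s - m])
        trajectory.append(s)
        s -= m
    # The player who took the last stone won. Walking backwards, the states
    # alternate between the winner (target 1.0) and the loser (target 0.0).
    outcome = 1.0
    for state in reversed(trajectory):
        V[state] += ALPHA * (outcome - V[state])
        outcome = 1.0 - outcome

# Optimal play leaves the opponent a multiple of 4, so those pile sizes
# should now have the lowest values for the player to move.
print(sorted(range(1, N + 1), key=lambda s: V[s])[:5])
```

The evaluation function is never told the rules of good play; it is distilled
entirely from the outcomes of its own games, which is the part that made
AlphaGo's approach notable.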

------
gsam
In my view, part of the fundamental problem with current AI approaches to
general intelligence is the inability to collate and organize different
systems and algorithms together in a coherent manner.

To use a flawed but useful mode of thinking: we've succeeded at codifying
crystallized intelligent systems (far beyond human means), we've made some
progress on fluid intelligent systems (at worst, random search), but we have
not managed to make any progress on the mediator of the two. We can build a
million different single-purpose neural networks, but we have very few meta-
classifiers for knowing which network to use at which time (and that can also
track their own self-modification). Watson is possibly the closest we have at
this time, but it seems to work on static information which it cannot update
or improve on its own.

------
bsaul
I don't understand the criticism this article receives here. It makes a very
good counterpoint to all the hype the AI field is experiencing at the moment,
which leads the general audience to believe that algorithms will replace
humans quite soon.

You may think it doesn't matter, but we have candidates in the presidential
election here in France building economic programs that assume "robots will do
everything soon, so let's provide everybody with a universal basic income" is
a likely immediate future.

It is extremely healthy to let people understand that all we've been able to
build so far are number-crunching machines, which absolutely don't give any
kind of _meaning_ to the numbers they see passing by. Meaning as in: I see
what this number represents, and the consequences it has in real people's
lives.

~~~
mannykannot
If the mind is what a physical brain does, then a physical brain can in
principle be simulated - by crunching numbers - to produce a mind (some have
claimed that there must be more to it than that, but none have come up with a
generally accepted justification for their position).

~~~
bsaul
That's another discussion. The algorithms we're building right now aren't
simulators of consciousness or thought or anything usually attributed to
intelligence. They're not even trying to be, and the theoretical foundations
they're built on aren't theories of consciousness, but pure statistics and
signal processing.

~~~
mannykannot
True, but that does not follow from their being number-crunching machines; it
is not a useful characterization in this context.

------
didibus
This is how I see it:

If the machine comes up with the decision-making logic, it's AI; if it does
not, it's not AI.

A sorting algorithm, in most cases, is a description of the logic to decide
how to proceed at each step to accomplish the task of sorting. The computer
merely executes that logic for us very very quickly.

On the other hand, self-driving machine learning algorithms come up with the
logic of when to steer left or right and to what degree, and when to
accelerate or decelerate and to what degree, all by themselves. Often we can't
even formalize the logic they use; we can only test to see how well it
performs. This is AI. The computer originated the intelligence, it did not
simply execute it.
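
To illustrate the distinction with a toy example of my own (the steering
scenario is invented, and it assumes scikit-learn is installed): the same
decision, once as logic a human wrote down, and once as logic induced from
examples.

```python
# Hand-written logic: a human decided the rule; the computer only executes it.
def steer_hand_coded(dist_left: float, dist_right: float) -> str:
    # if the obstacle on the left is closer, steer away from it
    return "right" if dist_left < dist_right else "left"

# Learned logic: we supply examples, and the machine derives the rule itself.
from sklearn.tree import DecisionTreeClassifier

X = [[0.5, 3.0], [3.0, 0.4], [1.0, 2.0], [2.5, 0.8]]  # distances to obstacles
y = ["right", "left", "right", "left"]                 # what a driver chose
model = DecisionTreeClassifier().fit(X, y)

# Both produce a decision, but only one decision procedure was authored by a
# human; the other was induced from data and was never written down by anyone.
print(steer_hand_coded(0.7, 2.0))         # 'right'
print(model.predict([[0.7, 2.0]])[0])     # most likely 'right'
```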

Within that, there is a spectrum which even humans exhibit. I cannot come up
with logic to solve all tasks. For most tasks, I would need a good teacher and
a lot of practice before I could find, on my own, logic that solves the task
even slightly, depending on the difficulty of the task at hand.

I'd say there's only one exception to this rule. Sometimes it is useful to
call something that merely appears intelligent AI. In most video games, the
computer is simply executing logic a programmer gave it, but to most players
it appears to be coming up with that logic on its own. Most often it is not: a
programmer came up with it, and you're fooled into thinking otherwise, so the
computer appears sentient. This is not true intelligence, but being able to
use logic to make decisions, even if that logic did not originate from you, is
probably a part of intelligence. In a sense, if I could program a machine to
perform all tasks, even though it uses the logic that I came up with, that
machine would appear quite useful. It would only fail when encountering a task
it has never seen, or a condition in a task that's not accounted for in my
logic. Machine learning would not fail at those; it could re-learn from the
new condition and come up with a different way to solve the task.

------
adzm
Relevant Hofstadter quote:

> Sometimes it seems as though each new step towards AI, rather than producing
> something which everyone agrees is real intelligence, merely reveals what
> real intelligence is not.

------
qeternity
I really don't understand all this pedantic squabbling over terminology.
Nobody has suggested that AlphaGo is AGI or human-level intelligence. But it
is intelligent. Just as in humans or any other organism, intelligence is a
spectrum. Artificial intelligence is no different. I think even the layman
understands that AlphaGo is not a sentient being inside a computer, just as
people understood the same about Deep Blue.

~~~
flashman
> intelligence is a spectrum

To me, saying algorithms are on the intelligence spectrum is like saying the
weather report is on the humour spectrum. It's true but ultimately
meaningless, because neither algorithms nor the weather report exhibit the
interesting properties of those spectrums.

------
chejazi
OK, so we have AIs that are capable of solving a very specific problem. My
question is: are AIs capable of being composed? In the sense that if I have
1000 AIs that are each experts at performing their domain-specific tasks, can
one or more AIs be built on top of them that specialize almost exclusively in
delegation? i.e., they delegate intelligently depending on the task at hand.
If that's possible, then we could in theory build a generalized AI, right?

~~~
gsam
A simple decision tree could suffice. Is it a speech problem? An image
problem? A classification problem? Failing to find an acceptable answer, it
does what seems closest or makes a random but plausible decision. That doesn't
seem too far from what humans do. Part of the problem is the meta-recognition
of the problem itself, and how to organize that within an actual system.
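
To make that concrete, here is a minimal sketch (my own toy illustration; the
"experts" are placeholder callables standing in for real trained models) of
delegation by a meta-recognizer:

```python
from typing import Callable, Dict

# Placeholder "expert" AIs; a real system would plug in trained models.
def speech_expert(task: bytes) -> str:
    return "transcript of the audio"

def image_expert(task: bytes) -> str:
    return "labels for the image"

def fallback_expert(task: bytes) -> str:
    return "closest or random-but-plausible answer"

EXPERTS: Dict[str, Callable[[bytes], str]] = {
    "speech": speech_expert,
    "image": image_expert,
}

def classify_task(task: bytes) -> str:
    """The meta-recognition step: decide what kind of problem this is.
    Here it is a trivial file-signature check; recognizing the problem
    type in general is exactly the hard, unsolved part."""
    if task[:4] == b"RIFF":               # WAV audio container
        return "speech"
    if task[:8] == b"\x89PNG\r\n\x1a\n":  # PNG image header
        return "image"
    return "unknown"

def delegate(task: bytes) -> str:
    expert = EXPERTS.get(classify_task(task), fallback_expert)
    return expert(task)

print(delegate(b"\x89PNG\r\n\x1a\n...pixel data..."))  # -> labels for the image
```

The dispatcher itself is trivial; the open question in the thread is whether
the `classify_task` step can ever be made as general as the delegation idea
requires.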

------
wglb
How old is the debate about what is really AI? A program called _Chess_,
written in the 1970s by Larry Atkin and David Slate at Northwestern, did
manage to win one game from David Levy. See
[https://en.wikipedia.org/wiki/Chess_(Northwestern_University...](https://en.wikipedia.org/wiki/Chess_\(Northwestern_University\))

Atkin often complained about the characterization of their program as AI,
saying (paraphrasing here), "It isn't AI. It is just good software
engineering."

------
itchyjunk
If a chimp could play the game of Go at the same level as humans, would we say
the chimp had intelligence? When you base your definition of intelligence on a
sample size of 1 (humans), but can't even define it within that context, you
get these kinds of problems. Is a platypus a mammal or not? Does the
classification change the function? Are all humans intelligent, or do they
possess natural intelligence? Is there a degree of intelligence, i.e., are
some humans in possession of more intelligence than others?

I look at artificial intelligence as an umbrella term whose meaning varies
with context. If you classify a task as needing intelligence to solve and
someone finds a piece of code to do it, you have an artificial intelligence
there. You can keep moving the goal post, but it's only nuance and definition.

~~~
mannykannot
There are problems where humans need to use intelligence, but which are
amenable to brute-force computing - chess is an example - and that does not,
in my view, make brute-force computing qualify as intelligence. It does not
accord with what the term AI originally meant, and the first moving of the
goal post was to apply it to these cases.

------
intrasight
Defining AI is hard - we can agree on that. I think the best definitions are
in the form of tests that match performance against organisms that have
developed intelligence through millions of years of evolution. The Turing
Test is a good example of this. I think we can all agree that an AI that
passes the Turing Test is intelligent. But that is the final goal post. What
about intermediate milestones? My opinion is that games just don't count.

I think that creating similar tests as milestones would do more to advance the
AI field. Like the Turing Test, each must show that parity with another
intelligent being has been achieved. Intelligence is a spectrum from "not
very" to "omniscient". What are some organisms closer to the "not very" end of
this spectrum that would be good candidates for parity tests? How would a
parity test be performed?

It is in answering that question that one arrives at “There is no AI without
robotics”. If you want to demonstrate that your artificial cockroach is as
intelligent as the real thing, then you have to build a robot. It doesn't have
to be a physical robot, as long as you have a simulator that can do a very
accurate simulation of the physics of the environment, the AI candidate, and
their interactions. That's not so easy, which is why many if not most
researchers build physical robots.

Would a cockroach be a good first milestone? Too complex? How about a worm?
Achieved parity with cockroach? Then move on to Jumping Spiders. When your
great-grandchildren achieve that parity, my great-grandchildren will certainly
celebrate with them.

------
RandyRanderson
IMO there are only 2 problems in machine learning (ML):

1\. Regression - Given a set of vector pairs Xi and Yi, find a function that maps each Xi to Yi, minimizing some objective function.

2\. Path Finding - Given some topology (typically on some regression output), find the "best" path, given some objective function.

All other problems (classification, clustering, etc.) can be reposed as a
combination of these two.*
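
As a small sketch of that "reposed as regression" claim (my own toy example;
it assumes only numpy), classification becomes regression by fitting one-hot
class indicators and classifying via an argmax:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs in 2-D with labels 0 and 1.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

Y = np.eye(2)[y]                        # one-hot targets, one column per class
Xb = np.hstack([X, np.ones((100, 1))])  # add a bias column

# Problem 1 (regression): least-squares fit minimizing ||Xb W - Y||^2.
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

# "Classification" is now just an argmax over the regression outputs.
pred = (Xb @ W).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```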

Viewed in this way, an "AI" system set against any reasonable test of
intelligence would score superhuman (defined against the average adult human)
with the current set of algos, with one caveat.

If we think of operating systems in, say, the '90s, we would mostly agree that
few innovations (evolutions as opposed to revolutions) have occurred compared
to the OSs we have today. It's the development, integration, testing, tools,
hardware support, etc. that have taken all this time.

So, to the article's point, the AlphaGo system does these 2 things pretty
well, just as Windows 95 did things pretty well. Yet even today I have to
restart my computer every time there is the most trivial of updates.

TL;DR: Superhuman audio/visual/NLP ML == AI is both here now and a long way
off.

* One could probably cut this to one, but stating it would be very convoluted.
Also, "pathing" problems are typically non-convex and real-time, whereas
regression problems are typically not.

------
parenthephobia
One problem with defining what is and isn't AI in the current age is that as
something approaching "real" AI - rather than Hollywood AI - enters the public
consciousness, it is seized upon by marketers wanting to attach the new
buzzword to their products, effectively diluting the term in the process.

Things that a decade ago would have been described as performing regression
analysis, and just last year doing machine learning, may be described this
year as being powered by AI.

------
sgt101
Margaret Boden made these points previously.
[http://www.sussex.ac.uk/profiles/276](http://www.sussex.ac.uk/profiles/276)

A fun alternative strand in AI research driven by these thoughts (I think) is
typified by Rosie. Rosie is the proper job.

[http://soar.eecs.umich.edu/articles/articles/videos-
rosie](http://soar.eecs.umich.edu/articles/articles/videos-rosie)

There is an unseemly comment in this thread that alludes to a genuine issue
with the embodied-AI argument. People with other bodies are not lesser
intelligences than those who have a socially cast "typical" body or sensors,
and given that, one has to question this line of reasoning. The answer is that
intelligence is embodied outside and beyond your physical being - it's your
body, your family, your tribe, and so on. I find the philosophy difficult, but
to prove to yourself that there is something in this view you need to hike out
to a wood somewhere and spend a couple of weeks by yourself. Things happen and
who you are changes (it did for me). Cut yourself off, or be cut off, and you
become about as far from a human (an autonomous cognitive agent) as one can
get.

------
mark_l_watson
While I accept AlphaGo as 'real AI' (I watched the match with Lee Sedol in
real time, and years ago I wrote and marketed my own Go-playing program,
'Honninbo Warrior'), I agree that AlphaGo is not a general-purpose AI.

Deep learning has been personally career/interest-changing, but I still think
that we should be building hybrid neural network / symbolic AI systems.

------
saosebastiao
This whole post can be (unintentionally) TL;DRed by a single Wikipedia
article:
[https://en.m.wikipedia.org/wiki/AI_effect](https://en.m.wikipedia.org/wiki/AI_effect)

The phenomenon of people discounting significant advances as "not AI" is not
new. I'm surprised it hasn't gotten old yet.

~~~
lucb1e
> The phenomenon of people discounting significant advances as "not AI" is not
> new. I'm surprised it hasn't gotten old yet.

It hasn't gotten old because people keep calling non-AI "AI". I have so far
refrained from correcting people because I see it like the word "hacker",
which on Hacker News is widely understood in its correct meaning, but is
commonly used elsewhere to refer to computer criminals. I long ago gave up on
correcting people to use the word "crackers" rather than "hackers". But since
it has been brought up...

Artificial Intelligence refers to something that behaves intelligently and can
adapt to new situations. I don't think human brains are magic or divine (I was
surprised to learn this is not a universal opinion), so in that sense we're
all "just a computation". In that sense, the simplest pocket calculator is a
thinking machine, since it can take an almost infinite number of inputs and
respond to them logically. But everyone with (artificial) intelligence
understands that this is not what is meant by artificial intelligence.

The article that you link claims that every major advance is referred to as
"just a computation", but I doubt that people will claim AI is "just a
computation" once a computer learns to speak and can hold a real conversation
at the level of a 3-year-old or something.

~~~
EliRivers
_like the word "hacker", which on Hacker News is widely understood in its
correct meaning_

You'd be surprised. Here's part of just such a conversation from a couple of
years ago: _I've always thought "hacker" without qualifiers meant "person who
can code"_

[https://news.ycombinator.com/item?id=9790316](https://news.ycombinator.com/item?id=9790316)

Other terms commonly used here with wildly different meanings being applied
include "liberal", "feminism" and "capitalism". Screaming matches abound... :)

~~~
lucb1e
I was thinking the same while typing that (that, due to being somewhat
mainstream, "Hacker" News might not understand the word for what it is). I
personally use the catb.org definition[1], which is quite broad and lists 8
definitions -- if you count the 'deprecated' use referring to 'computer
criminal', which I do count as one of the popular definitions.

[1]
[http://catb.org/jargon/html/H/hacker.html](http://catb.org/jargon/html/H/hacker.html)

------
paradite
Regarding the definition of AI, I find the lecture notes for intro-to-AI
courses at various universities [1] very useful:

> Views of AI fall into four categories:

> Thinking humanly - Cognitive Modeling

> Thinking rationally - “Laws of Thought”

> Acting humanly - Turing Test

> Acting rationally - Rational Agent

> The textbook advocates "acting rationally"

Indeed, most of what we see today in the AI community is solving the problem
of "acting rationally", whereas the other categories have their respective
fields (Cognitive Science, Cognitive Neuroscience, Philosophy) that are no
longer closely related to computer science.

[1]
[http://homepage.cs.uiowa.edu/~hzhang/c145/notes/m1-intro-6p....](http://homepage.cs.uiowa.edu/~hzhang/c145/notes/m1-intro-6p.pdf)

~~~
eli_gottlieb
I would say that if your paradigm for studying thought cannot explain away at
least two of the other three, you've got it wrong. A good theory in cognitive
modeling ought to have applications to all the rest. A good theory of acting
rationally ought to have applications to acting humanly or thinking
rationally, and _ideally_ to thinking humanly as well.

------
dbcurtis
AlphaGo is not AI in exactly the same way that Aldebaran robots are not
robots. Seriously, what do those robots do? They are just one step above
Disneyland's pre-programmed animatronics. If this dude wants me to take him
seriously, he needs to show me a robot that solves a problem for somebody. Nao
is just a waste of perfectly good servos. Or, alternatively, if he insists
that Nao is a robot, then I think AlphaGo can be considered AI.

People have been saying "___ is not AI" since Claude Shannon built a robot
mouse to solve a maze. Then Shannon's chess playing program was "not AI".
Humbug.

~~~
unityByFreedom
> People have been saying "___ is not AI" since Claude Shannon built a robot
> mouse to solve a maze. Then Shannon's chess playing program was "not AI".
> Humbug.

AI researchers say this because they don't want the hype to lead investors to
put money in the wrong places, which could cause another AI winter.

OpenAI is one example. Fortunately, it's a non-profit. If it were profit-
seeking and a large investment of theirs went belly up, it might cause the
industry to tank.

That bust will probably happen again in the future. The "scrooges" are just
trying to forestall it.

------
rocqua
> Culture is the essential catalyst of intelligence

I liked the article on many fronts and thought it tried to justify most of its
claims. However, the quoted claim goes unsupported.

Does anyone know of justifications for the quoted claim?

------
quantum_state
It would be beneficial to take a pragmatic approach ... AI or not ... as long
as the end product helps the human condition ... we just use it ... similar
debates have happened so many times in the past ... there is really no point
in diverting the attention ... for the people who know, it is clear the title
is clickbait ... for the people who don't know ... they find this out anyway
after they click ...

------
inventtheday
It's more like AlphaGo is a PIECE of strong AI. Deep learning, as it's
implemented today, is probably not the secret sauce of consciousness, but it
is a powerful and useful subsystem. In that sense, it is a tangible step
towards the eventual goal of general strong AI.

------
happycube
There's an old AI Winter saying: "If it works, it's not AI" ;)

------
S4M
relevant: [https://www.quantamagazine.org/20160329-why-alphago-is-
reall...](https://www.quantamagazine.org/20160329-why-alphago-is-really-such-
a-big-deal/)

------
powerapple
are "we" the result of hardcoded DNA or the result of learning? Our objective
of survival is kind of hardcoded objective we optimize our life on. Why do we
care about if we can build another "intelligence" similar to ours? Why not
just build something good at jobs we don't want to do, and leave "culture" to
us. I strongly believe a higher being will laugh to see we call ourself
intelligent the same way we see artificial intelligence.

------
m3kw9
This is a pointless argument; we all know AI is all about thought. If a person
can't move, does he still have intelligence?

~~~
notahacker
Whilst I don't like the author's way of phrasing interaction with real-world
information as "fundamentally tied to robotics", humans whose range of
movement is restricted to scrunching up one cheek have won Nobel prizes,
because even with their physiological limitations they've still had access to
and assimilated information _vastly_ more complex and unstructured than board
positions for the game of Go, in order to devise their own considerably more
complex and varied end goals and means of communication to reach them. Whilst
a computer which can choose among hundreds of legal Go positions at each turn
might actually have more output channels than Stephen Hawking, it hasn't had
and doesn't have the range of inputs to be in a position to contemplate
alternative end goals such as dictating books on theoretical physics or
ordering a cheeseburger (let alone the ability to learn how to be understood
by sympathetic non-computers enough to overcome its physical limitations).

One can turn it around: if an organism has no means of assessing or
communicating with the outside world bar the ability to send and receive
electrical signals so simple they can be mapped to the game of Go and whose
survival to the next generation depends on electrical signals combining in
various rare patterns hardcoded into its DNA (which could be mapped to the win
conditions for the game of Go), would we even consider the possibility of such
an organism being intelligent simply because its route to achieving those
patterns was incomprehensible to and could not be deactivated by counter-
stimuli from the humans that studied it? Would such a theoretical organism
even need to be composed of more than one cell?

------
ucaetano
Title: "Why AlphaGo Is Not AI"

Actual argument: "Is [AlphaGo] going to get us to full AI, in the sense of an
artificial general intelligence, or AGI, machine? Not quite"

Conclusion: "the rapid advances of deep learning and the recent success of
this kind of AI at games like Go"

So not only does the article discuss something different from the headline, it
directly contradicts the headline by calling AlphaGo a type of AI.

Bad journalism.

------
empath75
In a hundred years, google is going to be arguing with Facebook about whether
humans are intelligent.

------
visarga
> AlphaGo Is Not AI

And the questioning article is not real writing. It fails the standards of
"real writing" that I uphold. A real article doesn't come with a clickbait
title, and is better documented. I don't think the author knows what he's
talking about - he's just expressing incredulity based on his gut feelings.

------
digitalshankar
AlphaGo is an AI. The author of this article expects AI to understand itself,
code itself, and have sex and then reproduce. WTF? AI is technology, not
biology. AI and the human brain are not the same. This is like expecting AI to
be born by itself without a human programming it.

------
dllthomas
Of course it's not _really_ AI; it works.

------
CharlesW
I guess that makes AlphaGo an _AIdiot savant_?

------
ganfortran
BS. It is AI; it's just AI that can only play Go.

------
nullc
Good to know that quadriplegics are not intelligent, much less sentient.

~~~
dang
Please don't make a point this way on HN.

------
PunchTornado
this is one of those clickbaiting articles:

\- taylor swift is not really an artist

\- paulo coehlo is not really a writer

\- javascript is not really programming

\- Trump is not really human

------
suprfnk
Off-topic, but why do people design websites this [1] way? The content isn't
even a fourth of the page width, while the rest of the page is filled with
random articles, newsletter signups, and follow buttons.

The topic sounds interesting, but the presentation is incredibly off-putting.

1: [http://i.imgur.com/Obxuj9A.jpg](http://i.imgur.com/Obxuj9A.jpg)

