
Grandmaster Garry Kasparov on Artificial Intelligence - together_us
http://fortune.com/2017/09/25/garry-kasparov-chess-strategy-artificial-intelligence-ai/
======
ilaksh
He was right a few years ago. But now there are several groups who have
developed efficient, capable online learning systems that don't require much
data or iteration. When these and other cutting-edge neural network advances,
such as techniques for avoiding catastrophic forgetting, are combined with
incremental training in diverse environments with general inputs and outputs,
I believe we will see general-purpose intelligence.

I believe we will see some demonstrations of AGI in the next two years. At
first they will likely be general but unimpressive and not really as capable
as animals or humans, and so people will dismiss them. But the demonstrated
capabilities will quickly increase, and before 2023-2024 there will likely be
a consensus that AGI has been achieved.

Look at systems like this one
[https://github.com/ogmacorp/EOgmaNeo](https://github.com/ogmacorp/EOgmaNeo).
It's a whole other type of NN that Kasparov and others aren't even aware of.

~~~
colorint
Do you believe that artificial intelligence will be capable of deciding, given
an algorithm and a set of inputs, whether the algorithm will finish running?

~~~
jkabrg
Here's a way of thinking about the undecidability of the halting problem.
Let's say you've got a person who's amazing at reading minds, and you bring
someone off the street and tell them they can either have steak or a cupcake
(but not both). You then ask the mindreader to decide if the person will have
the cupcake or the steak. Conceivably, they might be able to figure out which
one the person will have. Now let's say that you add a twist: you walk up to
the person from the street and tell them what the mindreader predicted; in
that case, the mindreader can't succeed because the subject can choose to do
the opposite. That's similar to how the undecidability proof of the halting
problem works.

Now instead of a mindreader, we have a halting oracle, and to make its job
impossible we have a test program that is "made aware" of the halting oracle,
and does the opposite of what the oracle says. An impossible problem for the
oracle. But that raises the question: how many potential applications of
the halting problem will involve test subjects that actually know what the
halting oracle thinks? How many test subjects even know about the halting
oracle? For instance, how can a program that looks for counterexamples to the
Goldbach conjecture know anything about your halting oracle? In these cases,
the undecidability proof doesn't apply.
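The diagonalization construction above can be sketched in a few lines of Python. This is a hypothetical illustration: `halts` stands in for any claimed halting oracle (no real one exists), and `make_contrarian` builds the "test program made aware of the oracle":

```python
def make_contrarian(halts):
    """Given any claimed halting oracle halts(f) -> bool, build a
    program that does the opposite of whatever the oracle predicts
    about it."""
    def contrarian():
        if halts(contrarian):
            while True:      # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately
    return contrarian

# Any concrete oracle is defeated. An oracle that always answers
# "loops forever" predicts contrarian never returns, but it does:
g = make_contrarian(lambda f: False)
g()  # returns immediately, contradicting the prediction
```

An oracle that always answers "halts" loses symmetrically: its contrarian loops forever. Since the same trap works for every candidate oracle, no single `halts` can be correct on all programs.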

So the answer to your question is conceivably yes.
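A minimal sketch of the Goldbach searcher mentioned above, with a finite bound added so it can actually be run (nobody knows whether the unbounded version halts). All names here are illustrative, not from any real library:

```python
def is_prime(n):
    # Trial division; fine for a toy searcher.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    # True if the even number n is a sum of two primes.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_counterexample(limit):
    # The unbounded version (limit = infinity) halts iff Goldbach's
    # conjecture is false. Note it consults no halting oracle and
    # "knows" nothing about one, so the diagonal trick can't touch it.
    for n in range(4, limit, 2):
        if not goldbach_holds(n):
            return n
    return None

print(find_counterexample(10_000))  # prints None: no counterexample this small
```

Deciding whether the unbounded loop halts is exactly as hard as settling the conjecture, but that difficulty comes from number theory, not from the undecidability proof.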

~~~
colorint
Yes, that was Turing's proof. Church's more indirect proof is that it's
impossible to decide the equivalence of two lambda expressions. But the real
essence of the problem is more like, you can't generally know ahead of time
all the values that will be presented inside loop bodies. If you try to
actually elaborate the loop, then you get caught again: if the elaboration of
the loop keeps going for a while, the problem re-presents itself, and at what
point do you give up? Which is to say, will this program, with these inputs,
run forever? But this is basically Church's proof, since this is also the
question, am I in exactly the same configuration as before? Without an ability
to decide lambda expression equivalence, that question is also hopeless.

Hence the work that gets done in this area constrains the problem down to
situations in which you can know enough to decide halting, like traversal of
lists or trees that are known to be finite. I bring this up because, when it
comes to AI, people want more than this.
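The "am I in exactly the same configuration as before?" question is answerable when the program's reachable state space is finite, and then halting really is decidable by cycle detection. A toy sketch, assuming a program is modeled as a hashable state plus a single-step function that returns None on halt (all names are illustrative):

```python
def decides_halting(step, state):
    """Decide halting for a machine whose configuration is a hashable
    state and whose step(state) returns the next state, or None on halt.
    Correct only when the reachable state space is finite."""
    seen = set()
    while state is not None:
        if state in seen:
            return False  # exact same configuration as before: runs forever
        seen.add(state)
        state = step(state)
    return True

# Counter mod 10 that halts on 7. Stepping by +3 from 0 reaches 7:
step_a = lambda x: None if x == 7 else (x + 3) % 10
# Stepping by +2 from 0 only visits even numbers, so it cycles forever:
step_b = lambda x: None if x == 7 else (x + 2) % 10

print(decides_halting(step_a, 0), decides_halting(step_b, 0))  # True False
```

This is the same idea behind termination checks over finite lists and trees: the decider works precisely because the constrained setting rules out unbounded configurations.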

------
hal9000xp
Back when Deep Blue won its chess match against Kasparov, everyone in the
media talked about Deep Blue's superior intelligence.

At the time, I clearly realized that IBM had just built a brute-force
"bulldozer" that could examine 200 million positions per second. Even with
that power, it had only a slight advantage over Kasparov, who could consider
only a handful of positions per second.

Now we have another generation of "intelligent" machines based on deep
learning, but I see them as just upgraded versions of those brute-force
"bulldozers". It takes hundreds of millions of samples to infer rules that a
human can infer from a thousand samples or even fewer.

So I would call truly intelligent a machine that can learn to play chess or
Go from only a few thousand example games, calculating only a few moves ahead
and no more than a few moves per second. Obviously such a machine would beat
human intelligence completely.

Such a machine still may not have self-consciousness or qualia, but that is
yet another big challenge.

~~~
a1exyz
Are you sure that human thought isn't basically brute-force bulldozing? Just
because it doesn't feel that way doesn't mean it isn't.

The time it takes us to learn something, the number of times we have to
see or experience it, could be akin to bulldozing, couldn't it?

There are a lot of neurons in our brains that are constantly firing, perhaps
comparable to the number of transistors in a deep learning GPU if you account
for the difference in training time.

------
together_us
Also, his interview at Talks@Google with DeepMind’s CEO Demis Hassabis.[0]

[0]:
[https://www.youtube.com/watch?v=zhkTHkIZJEc](https://www.youtube.com/watch?v=zhkTHkIZJEc)

------
pls2halp
He also did a talk at this year’s DEF CON (presumably these are all for the
same book): [https://youtu.be/fp7Pq7_tHsY](https://youtu.be/fp7Pq7_tHsY)

------
pawn
One thing I'd be interested to learn is how much of the difference between an
above-average chess player and a Master or Grandmaster comes down to better
decision making after looking 3 or 5 moves ahead, and how much to the
Master's or Grandmaster's ability to look 10+ moves ahead.

~~~
quantdev
The looking 10 or even just 5 moves ahead thing is overstated; that is not
actually how it works most of the time. Most GMs only calculate that far in
the endgame. Before that, looking 2 or 3 moves ahead is often sufficient,
guided by strategic elements or opening theory (which can't easily be
understood by 'looking moves ahead'; they're things like "this pawn is
passed" or "my light squares will become very weak", which can be substitutes
for looking 30+ moves ahead).

Positions often resemble historic or previous games, so pattern recognition
and the themes of the old game (e.g., "this particular structure will make it
easier to get my rook on the 7th rank at some point") are important.

In fact, Capablanca, a former World Champion and endgame expert, famously
claimed to look only one move ahead.

------
melling
“When I lost our rematch in 1997...”

It has already been two decades. We’re supposed to be three decades from the
singularity. Personally, it doesn’t feel like we’re accelerating towards an
AI that surpasses humans in general.

------
jasonmaydie
Another nice way of saying that the "AI" we're bandying about is not truly
AI: just machines trained to blindly do one thing very well.

