
Garry Kasparov: Why the world should embrace AI [video] - Nikiforos79
http://www.bbc.com/future/story/20170616-garry-kasparov-why-the-world-should-embrace-ai
======
jasode
I haven't read Kasparov's book and the short article doesn't really dive deep
into it so I can only comment on the BBC soundbite...

If people are "pushing back against AI", it's not the progress of technology
they're against -- it's the _economic consequences_. People are worried about
joblessness and no financial security for retirement.

It's similar to saying "embrace outsourcing because you get cheaper products"
or "embrace H1B because America was built on immigrants and their skills".

You can't just speak of those aspirations as general platitudes without being
aware of _what the real worries are_. People aren't xenophobic -- they just
want to keep their livelihoods.

If you don't address the commoner's concerns, you'll be perceived as
disingenuous.

~~~
harryf
> If you don't address the commoner's concerns, you'll be perceived as
> disingenuous.

True, but I'd argue that the current hype about AI replacing jobs has much
more to do with media clickbait than with any real issue we should be worrying
about right now.

If your business is selling ads next to cheap content, it's a great story - it
scares people into clicks. That, to me, is the real issue here. How do we
prevent the media from peddling scare-content like this?

For those old enough to remember the "Y2K bug" that was going to have planes
falling from the sky at the stroke of midnight, 2000, there's a great analysis
of the press's role in it at [http://www.flatearthnews.net/chapter-one-bug-ate-
world](http://www.flatearthnews.net/chapter-one-bug-ate-world) ...

> "Encouraged by these stories, some governments had spent fortunes in public
> money (and secured no better result than those who spent next to nothing).
> Journalists reported that the British government had spent £396 million on
> Y2K protection. They also reported that it had spent £430 million. And that
> it had spent £788 million. The American government had spent far more, they
> said - $100 billion, or $200 billion, or $320 billion, or $600 billion, or
> $858 billion, depending on which journalist you were reading. Anyway, it was
> a lot. Beyond that, the private sector had spawned a mini-industry of
> companies selling millennium bug kits, while publishers turned out bug books
> and bug videos, and estate agents sold bug-resistant homes, and a few
> families sold their houses and fled to remote cabins in order to give
> themselves a chance to survive the coming bug-related chaos. But this was
> not a story."

> "The sun rose on January 1 2000 like the lights coming on at an orgy.
> Everybody who had been so busy - the journalists, the governments, the bug-
> related businesses and the computer experts - all picked themselves up,
> hoped nobody was looking and quietly tip-toed away."

~~~
florianletsch
> (...) the current hype about AI replacing jobs has much more to do with
> media clickbait than any real issue we should be worrying about right now.

Hype? For sure. But the current advances in autonomous vehicles will have a
very real impact on the working life of the common truck driver (3.5 million
in the US alone). I think we all agree on that.

~~~
credit_guy
I for one don't agree. If autonomous trucks displace truck drivers over the
span of 50 years, I don't see the problem. Very few current drivers will still
be in the workforce 50 years from now.

------
opportune
More pop AI garbage. As a rule, pretty much any article about AI from someone
outside the field of AI, or CS in general, is going to be full of these
vacuous, overly broad arguments. Stephen Hawking is guilty of this too.
Basically, any mention of "AI" and "dystopian" in the same sentence is a red
flag for this kind of stuff. I wish that as a website we could just ignore
these articles.

No, BBC. Garry Kasparov is not an AI authority. Please don't treat him like
one.

~~~
jimmytucson
You shouldn't judge an article based on who wrote it, whether it's Garry
Kasparov or someone "in the field" of AI. Sure, people are more likely to
listen to Kasparov because he's an intellectual megalodon. But by dismissing
anything not written by an "expert", you're making the exact same mistake.

~~~
fao_
> But by dismissing anything not written by an "expert", you're making the
> exact same mistake.

Uh, no. You're simply favouring people who have dedicated their lives to
studying and researching something over people who had a spare month and
thought "You know what? I'd love to write an article about something I know
absolutely nothing about".

I love to read about amateurs and autodidacts as much as the next person on
HN, but the simple truth is that most people talking about a field they have
not majored in are going to be neither of those. Autodidacts who put the
effort in are rare these days; what is significantly less rare is people who
write articles about a subject without having read even the basic texts on it.

This is true even for specialists trained in another field, such as Stephen
Hawking, Stephen Wolfram, Elon Musk (although I admit it is dubious to think
of a CEO as a specialist in this sense), and now Garry Kasparov. RationalWiki
has a list dedicated to Nobel prize winners in one field talking absolute
bullshit about another field:
[http://rationalwiki.org/wiki/Nobel_disease](http://rationalwiki.org/wiki/Nobel_disease)
(That is, bullshit in the sense of nonsense, not in the insult sense).

Actually, I would probably go so far as to say that this attitude is strictly
_against_ autodidacticism, because where autodidacts favour knowledge about a
subject, the other side favours arguments from authority and complacency of
thought.

~~~
ZenoArrow
> "Uh, no. You're simply favouring people who have dedicated their lives to
> studying and researching something over people who had a spare month and
> thought "You know what? I'd love to write an article about something I know
> absolutely nothing about"."

Do you not think the people that have dedicated their lives to studying AI are
likely to be more optimistic about its uses than the population at large?

I dislike the 'opinions as news' trend on the whole, but there is room in
public debates for non-experts in a specific field. The knowledge required to
develop the technology is different from the knowledge needed to be able to
engage in debates about its uses.

AI has the power to take jobs away from humans. This is not a controversial
view, mainly because it's already happened (such as in car manufacturing). In
my opinion, there's not enough debate about the implications of the increasing
sophistication of AI (not enough debate amongst the general population,
there's plenty in tech circles).

------
EternalData
I saw Garry speak in San Francisco -- I'm convinced he's an intellectual
giant, but he was very candid in admitting that, for all of his trained
knowledge in chess, he struggles with the basics once he moves to an adjacent
challenge (the game of Go). I think the same logic should apply to his
understanding of AI.

I do think there is a general phenomenon of "smart people outside of AI saying
dumb things about AI" -- I'm guessing it's because of the massive implications
artificial intelligence is going to have for every part of society.

------
petters
A few years ago, he wrote an article explaining that computers could never be
good at poker, because that required bluffing, a skill exclusive to humans.

~~~
colmvp
I wonder: couldn't one program a computer to record video and audio of the
players you are up against, derive some probabilities of hand strength from
stance, mannerisms, facial expressions, and voice, and add that information
into the mix with a betting AI?

~~~
bko
That's not even necessary. Using just logic, probabilities and computation,
research scientists have been able to beat very strong poker players head to
head.

[http://www.wired.com/2017/02/libratus](http://www.wired.com/2017/02/libratus)

------
grondilu
"[...] machines have algorithms, and they're getting better and better, but
machines have no curiosity, no passion, and most importantly, machines don't
have purpose."

IMHO the greatest uncertainty regarding AI, and in that sense the greatest
risk, is what happens precisely when AI becomes so good that it can be made to
have all those features. Intelligence is probably the toughest part of the
broader objective of emulating whatever the human body and brain can do.

A world where machines can do whatever the human body does is vastly different
from ours. It's hard to even imagine it, even though some authors are trying
[1]. And some of the possibilities include the end of our biological lineage.

1\. [http://ageofem.com/](http://ageofem.com/)

~~~
Santosh83
It will probably come from the opposite direction, i.e., integrating more and
more cybernetics into our bodies until at some point we're essentially only
our brains (biologically). Whether something non-biological can show emergent
properties that the brain has remains to be seen.

In any case, even an AI completely indistinguishable from ourselves is likely
to be only as much of a threat as our children are. Everything depends on how
it is managed. Apocalyptic scenarios generally can only happen in a tightly
connected, online society with no offline fallback mechanisms for control. If
we are that careless/stupid, then perhaps AI taking over our systems (and
hence us) is not so bad after all...?

AI needs creative intelligence, ability to learn and adapt the very process by
which it learns, as well as links to the physical world enabling it to control
and perpetuate its existence. The entire process is in our (human) control.
Until the very end.

The other direction is frankly more scary. Augmented human beings and other
existing living beings. That is much more prone to go haywire and pose a
threat.

~~~
grondilu
I think both directions are basically going to the same destination. I believe
we should worry more about the "artificial human" than about "artificial
intelligence". To get to the artificial human there are several paths,
biological (with genetic engineering), cybernetics (gradual replacement of
body parts with non-organic artefacts), robotics (non-organic from the start)
and software (brain emulation).

I believe all those paths eventually lead to the same kind of difficulties in
predicting what kind of world they would bring.

------
kushti
Interestingly, Kasparov is considered a great intellectual in the West, while
in the Russian-speaking world he is known for supporting some utterly
controversial theories, e.g. the Nosovsky-Fomenko pseudo-historical bunk (see,
for example,
[https://www.youtube.com/watch?v=4Thfip4Owz8](https://www.youtube.com/watch?v=4Thfip4Owz8)
[RUS]).

~~~
wruza
There is a notion that playing chess makes you... a better chess player, and
that's all.

------
peculiarbird
As one of the first humans to have their job outperformed by a computer, I
feel Garry has taken it well in stride. I suspect the rest won't be so
gracious.

~~~
icc97
It actually sounded like he was still hurt by it; he was very derogatory about
Deep Blue: it had no intelligence, it was just crunching numbers, etc. He kept
finding faults with it, everything except the part that beat him.

The quotes I've heard about AlphaGo (from Go players) seem to be more
respectful - that it's actually teaching professional Go players new ways of
playing.

~~~
annnnd
Of course, Go players probably saw it coming. Also, Google knew that it needed
to leave a good impression, not just beat the best human player.

------
forgot-my-pw
He recently did a Talks at Google with the CEO of Deepmind:
[https://www.youtube.com/watch?v=zhkTHkIZJEc](https://www.youtube.com/watch?v=zhkTHkIZJEc)

It mostly covers his experiences with Deep Blue. I wish it had been a longer
talk. He generally has a pretty positive view of computers and AI.

One interesting bit is when Demis Hassabis wondered whether a chess-trained
AlphaGo could be stronger at chess than Stockfish. It would be fun if they
really tried it.

PS: Hassabis once reached master standard at the age of 13, with an Elo rating
of 2300 (at the time the second-highest-rated under-14 player in the world,
after Judit Polgár, who had a rating of 2335), and captained many of England's
junior chess teams.

------
cJ0th
When they list the fears, they forget an important one: discrimination.

Of course, discrimination has always been a problem, but I'd like to believe
we see some progress in "classical areas" (for instance, sexism). The problem
with AI is that it is always pigeonholing. As a result, myriad new classes of
minorities will emerge, so small that they have neither a name nor a voice.
For instance, you get rated for creditworthiness and somehow you're not
typical re: attributes x, y and z. You get a bad rating, but you can't really
complain, as the algorithm probably didn't take things like sex or race
(directly) into account. However, it could be that your FB posts get
above-average numbers of likes from people with low education, and this may
raise red flags etc...
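The proxy effect described above can be illustrated with a toy sketch (all
numbers, feature names, and the scoring rule are hypothetical, invented for
illustration): a scorer that never sees the protected attribute can still
penalize a group through a correlated feature.

```python
import random

random.seed(0)

# Hypothetical population: group membership is never shown to the
# scorer, but a proxy feature (think "likes from low-education
# accounts") is strongly correlated with it.
population = []
for _ in range(1000):
    group = random.random() < 0.5                      # hidden attribute
    proxy = random.gauss(1.0 if group else 0.0, 0.5)   # correlated proxy
    population.append((group, proxy))

def credit_score(proxy):
    # The scoring rule only ever sees the proxy, never the group.
    return 1 if proxy <= 0.5 else 0                    # 1 = good rating

good_rate_in_group = (
    sum(credit_score(p) for g, p in population if g)
    / sum(1 for g, _ in population if g)
)
good_rate_outside = (
    sum(credit_score(p) for g, p in population if not g)
    / sum(1 for g, _ in population if not g)
)
# Despite never using the protected attribute, the two groups end up
# with very different approval rates (roughly 16% vs 84% here).
```

Nobody in the disadvantaged slice can point to an explicit use of sex or
race, which is exactly why such micro-minorities have no name and no voice.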

~~~
bitL
Imagine when AI undoes all positive discrimination, e.g. for minorities and
women, because it tries to optimize for results, i.e. bringing its own type of
meritocracy, going after the biggest gains without political affection
(survival of the fittest/strongest).

~~~
igk
How do you measure those results? Especially in fuzzy fields, where they are
very much influenced by "political affection".

------
ivanb
Why would anyone listen to him? He has neither an economics nor a technical
education. From what I know about him, he just knows how to play one game
really well. Basically a sportsman. Why would I listen to a sportsman on an
economic topic that is greatly affecting society?

~~~
bauerd
Because you should judge arguments, not their proponents.

~~~
ivanb
When the arguments come from a man whose only experience with AI is being
beaten by a chess program, I have every right to ignore them.

------
HONEST_ANNIE
Kasparov says that machines don't have purpose, passion or curiosity. This is
a typical expression of meat-machine privilege.

The purpose of a system is what it does (POSIWID). Feeling purposeful, or not
feeling purpose, is just a utility function giving feedback.

Is passion anything other than a utility function configured so that the
machine does not wander aimlessly and idle? If the machine has lots of freedom
to choose its actions and it chooses to avoid 'side quests', preferring one
issue over all others, does it have passion?

How about curiosity? Consider a phenomenon that is hard to predict and that
the machine can't recognize. Is the machine curious if it has been programmed
to investigate, poke around and learn? When a boosting algorithm increases the
weights of incorrectly classified instances, isn't that a primitive form of
curiosity?
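That boosting reweighting can be sketched in a few lines (a minimal
AdaBoost-style update, written from scratch here rather than taken from any
particular library):

```python
import math

def adaboost_reweight(weights, correct, error):
    """One AdaBoost round: boost the weights of misclassified
    instances so the next learner 'pays attention' to them."""
    # The learner's say, derived from its weighted error rate.
    alpha = 0.5 * math.log((1 - error) / error)
    # Misclassified instances get multiplied by e^alpha (> 1),
    # correctly classified ones by e^-alpha (< 1).
    new = [w * math.exp(-alpha if c else alpha)
           for w, c in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]   # renormalize to sum to 1

# Four equally weighted instances, the last one misclassified
# (weighted error rate 0.25):
w = adaboost_reweight([0.25] * 4, [True, True, True, False], 0.25)
# The misclassified instance now carries half the total weight.
```

The machine's "attention" is literally redirected toward the cases it got
wrong, which is the mechanical analogue of being drawn to what it doesn't yet
understand.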

Kasparov also says that AI does not make us obsolete. I assume that obsolete
means that if a meat unit is removed from its environment, others (society)
don't miss its contribution. What does 'us' mean? I interpret it as 'AI does
not make all of us obsolete.' This is most likely true for the foreseeable
future, if for no other reason than comparative advantage. The oversupply of
meat machines has already made many of them undesirable (overpopulation).

