
Steps Toward Super Intelligence Part I: How We Got Here - nkurz
https://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/
======
Isinlor
I'm curious why the author decided to completely disregard reinforcement learning.

"Note that neural nets are neither. There has been a relatively small amount
of non-mainstream work of getting neural nets to control very simple robots,
mostly in simulation only."

This is just plain false. Playing arcade games involves controlling an agent,
and there is a lot of widely published, very mainstream work from DeepMind,
OpenAI and many other lesser-known research groups. A big part of the hype is
about using deep neural networks in tandem with reinforcement learning. These
are very reactive systems.
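To make "reactive" concrete: a sketch of the kind of learning loop underneath these systems. This is plain tabular Q-learning on a toy 5-state chain environment invented for illustration (the Atari agents replace the table with a deep network, and the hyperparameters here are arbitrary), not anyone's actual implementation:

```python
import random

# Tabular Q-learning sketch of a purely reactive agent. Deep-RL Atari
# agents replace this table with a neural network, but the update rule
# is the same in spirit. The 5-state chain below is a toy invented for
# illustration: states run 0..4 and only reaching state 4 pays reward.

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
N_STATES = 5
ACTIONS = (-1, +1)                          # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Move along the chain; reward 1 only when ending in the last state."""
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(2000):                       # episodes from random starts
    s = random.randrange(N_STATES)
    for _ in range(20):
        # epsilon-greedy: mostly exploit the current Q, sometimes explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # the reactive core: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# With enough training the greedy policy moves right from every
# non-terminal state.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

The point is that nothing here deliberates: each action is a direct lookup on the current state, which is exactly the "reactive" behavior at issue.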

In complex environments, notable work from Nvidia is "End to End Learning for
Self-Driving Cars":
[https://arxiv.org/abs/1604.07316](https://arxiv.org/abs/1604.07316)

Also, Alpha Zero is a very clever combination of reinforcement learning, neural
networks and symbolic reasoning (tree search). Alpha Zero is not only
reactive, but also deliberative. Interestingly, Alpha Zero, in a very limited
domain, managed to surpass not only unaided humans but also many years of
programmers' effort based on purely symbolic methods, by beating Stockfish at
chess.
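The "deliberative" part is the tree search: during play, the network's policy prior is combined with accumulated search statistics to pick which move to explore next. A minimal sketch of that selection rule, in the PUCT style described in the AlphaGo/AlphaZero papers (the `c_puct` constant, move names, and numbers below are illustrative, not from the papers' data):

```python
import math

# AlphaZero-style PUCT selection sketch: the neural network supplies a
# policy prior P(a), while tree search accumulates visit counts N(a)
# and mean action values Q(a). The search repeatedly descends the tree
# by picking the action that maximizes this score.

def puct_score(q, p, n_action, n_parent, c_puct=1.5):
    """Upper-confidence score mixing the search value with the NN prior."""
    return q + c_puct * p * math.sqrt(n_parent) / (1 + n_action)

def select_action(stats, c_puct=1.5):
    """Pick the child maximizing the PUCT score.

    stats: dict mapping action -> (Q mean value, P prior, N visit count)
    """
    n_parent = sum(n for _, _, n in stats.values())
    return max(stats, key=lambda a: puct_score(stats[a][0], stats[a][1],
                                               stats[a][2], n_parent, c_puct))

# Toy example: a rarely visited move with a strong prior can outrank a
# well-explored move with a slightly higher mean value, which is what
# drives the search to deliberate rather than just react.
stats = {
    "e4": (0.55, 0.30, 40),   # (Q, P, N)
    "d4": (0.50, 0.60, 5),
}
print(select_action(stats))   # prints "d4"
```

This is why the system is more than reactive: the final move comes from thousands of these guided look-ahead descents, not from a single forward pass of the network.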

The author is no doubt a world-class expert in Good Old-Fashioned AI, but to my
surprise he seems to be out of the loop on more recent advancements.

Or am I missing something / misunderstood something?

~~~
AstralStorm
I wouldn't take that "Stockfish paper" announcement at face value.

As far as we could tell, Stockfish was used in a situation it is not optimized
for (very many cores) and with the wrong settings.

What we can say is that Alpha Zero is comparable after throwing a lot of
specialised compute power at it.

Most importantly, neither system works at a symbolic level or does anything
even close to thinking. They handle statistics and tree searches extremely
well, sometimes tactics, but most of it is still hardcoded - they are fed a
high-level symbolic representation of the game.

When Alpha Zero can learn to play chess by experimenting with board physics
and observing a few videos, or at least by reading a book, then we can say it
can actually produce some form of symbolic thought.

(OpenAI is closer, but nowhere near close - it falls back on memory of play as
a full state model.)

~~~
Isinlor
There is an ongoing open-source, community-based effort to replicate Alpha
Zero chess.

The effort is led by Gary Linscott. He is also one of the main developers of
Stockfish (I believe he built the framework for testing Stockfish).

[https://github.com/LeelaChessZero/lczero](https://github.com/LeelaChessZero/lczero)

Here is the current state of the effort:
[https://docs.google.com/spreadsheets/d/18UWR4FVhPi0vNwwPreu_...](https://docs.google.com/spreadsheets/d/18UWR4FVhPi0vNwwPreu_avd9ycujGQ5ayR2LzJOWP4s/edit#gid=1045682900)

The estimated date of surpassing Stockfish, with the settings and hardware
reported in the spreadsheet, is the end of this year.

@edit

I had missed your statements about symbolic reasoning. Yes, I'm referring to
the same type of symbolic reasoning as the author of the article, i.e. a
system that manipulates high-level symbols without any grounded understanding
of them in the reality surrounding us. Deliberating over whether it is
thinking or not is pointless. It's effective, and that's what matters.

------
le-mark
My first instinct was: great, another pithy article from a nerd rapture-ist
(riffing on Neal Stephenson here). But this piece is actually very
informative and interesting. A bit chatty, but a very good overview of the
history of AI up to now.

 _I can never get past the structural similarities between the singularity
prediction and the apocalypse of St. John the Divine. This is not the place to
parse it out, but the key thing they have in common is the idea of a rapture,
in which some chosen humans will be taken up and made one with the infinite
while others will be left behind._

~~~
TeMPOraL
The standard "singularity looks too much like rapture" argument.

Although it unfortunately now exists only in the Internet Archive, here's a
pretty decent rebuttal of this standard cognitive stopper, by Steven Kaas:

[http://web.archive.org/web/20110718031848/http://www.acceler...](http://web.archive.org/web/20110718031848/http://www.acceleratingfuture.com/steven/?p=21)

After pointing out that the similarity is mostly structural (read: accidental,
relevant _at best_ as a heuristic) and listing a set of important differences
between the two ideas, he finally concludes:

> _It’s also interesting to think about what would happen if we applied
> “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare
> because it’s the Armageddon of the Nerds? Can we ignore climate change
> because it’s the Tribulation of the Nerds? Can we ignore modern medicine
> because it’s the Jesus healing miracle of the Nerds? It’s been very common
> throughout history for technology to give us capabilities that were once
> dreamt of only in wishful religious ideologies: consider flight or
> artificial limbs. Why couldn’t it happen for increased intelligence and all
> the many things that would flow from it?_

------
Flenser
His dated predictions about self driving cars, robotics & AI, and space travel
are well worth reading:
[http://rodneybrooks.com/my-dated-predictions/](http://rodneybrooks.com/my-dated-predictions/)

------
arethuza
As someone who used to work on model based reasoning (e.g. for fault diagnosis
in industrial systems) I was rather impressed with the contrary view by Brooks
that _"the world is its own best model--always exactly up to date and
complete in every detail."_

Edit: Similar ideas also appear in _Anathem_, although obviously in a
somewhat different context.

------
dncrane
> But these heralds who have volunteered their clairvoyant services to us
> [...] just know that [superintelligence] is going to happen soon, if not
> already. And they do know, with all their hearts, that it is going to be
> bad. Really bad.

This is an unfair characterization of the most prominent figures in this
area. This article by Allan Dafoe and Stuart Russell from 2016 refutes it:
[https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/](https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/)

------
sabertoothed
> God created man in his own image. Man created AI in his own image.

Man created god(s) in his own image. Man created AI in his own image.

------
randcraw
Very cool. Reading Brooks is always worth my time, and a four-part review by
him on the current state of AI and how we got here... What fun.

