
Why Robot Brains Need Symbols - mindgam3
http://nautil.us/issue/67/reboot/why-robot-brains-need-symbols
======
mindgam3
The author, Gary Marcus, makes a cogent argument for why we need both deep
learning and symbol manipulation to build "true" AI. What I found convincing
was the argument that deep learning models currently can't even generalize to
novel instances outside the training space _in the field of object
recognition_, which is their strong suit. Why would any rational person
believe they have the capability to generalize to novel instances for other
types of problems outside of perceptual classification?
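The interpolation-vs-extrapolation gap the article describes can be sketched in a few lines. This is my own toy illustration, not anything from Marcus's work: a 1-nearest-neighbor "model" memorizes y = 2x over x in [0, 1]. Inside that range it looks like it has learned the function; outside it, it can only replay outputs it has already seen, so its predictions saturate at the boundary.

```python
# Toy sketch (plain Python, no ML libraries) of failing to generalize
# outside the training space: a 1-nearest-neighbor regressor "trained"
# on y = 2x for x in [0, 1].

def train(xs):
    """Memorize (input, target) pairs -- the training set."""
    return [(x, 2 * x) for x in xs]

def predict(model, x):
    """Return the target of the nearest memorized input."""
    return min(model, key=lambda pair: abs(pair[0] - x))[1]

model = train([i / 10 for i in range(11)])  # x = 0.0, 0.1, ..., 1.0

inside = predict(model, 0.42)   # 0.8, close to the true 0.84
outside = predict(model, 5.0)   # 2.0, nowhere near the true 10.0
```

A human told the rule "double the input" applies it anywhere on the number line; the memorizer is stuck inside the convex hull of its training data. Deep nets are far more sophisticated interpolators than this, but the article's claim is that the same basic limitation applies.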

Honestly, as a native speaker of three symbolic languages (English, French,
and chess), the need for symbolic manipulation, aka approaches from classical
AI, seems fairly intuitive, if not downright obvious. I don't understand why
Yann LeCun and other ML bigwigs find this controversial in the slightest.

Perhaps someone with more practical experience building AI could enlighten me
on this subject.

~~~
p1esk
_I don't understand why Yann LeCun and other ML bigwigs find this
controversial_

The problem is not with the idea of combining symbolic AI and DL, it's with
people like Marcus who seem to be all talk, without any results to show. He's
been talking about this for a while now, I think it's time for a demo.

~~~
mindgam3
Fair point. However, one could argue that symbolic AI is a harder problem,
perhaps one awaiting a theoretical breakthrough, and that the pace of progress
in this area has slowed at least in part because ML hype has sucked all the
air out of the room (along with the best talent).

~~~
p1esk
_along with the best talent_

"Best talent" can decide for themselves what they should work on. As long as
they produce results, no one cares how they do it.

