

The Believers: The story behind deep learning - mjohn
https://chronicle.com/article/article-content/190147/

======
SlipperySlope
Hinton gave a well-attended keynote at the recently concluded AAAI 2015
conference in Austin, in which one of his concluding slides simply stated, in
a headline font ...

"GOFAI is finished"

I'll explain the irony of this dramatic moment. AAAI is the Association for
the Advancement of Artificial Intelligence. Its annual conference is the
worldwide gathering of university graduate students, plus a few corporate
labs, presenting their research in the disparate niche areas that have
something to do with smart or human-like behavior. All these niche areas have
their own annual conferences: robotics, planning, the Semantic Web, knowledge
representation, natural language processing, pattern recognition, speech
processing, theorem proving, and so forth.

GOFAI is "Good Old-Fashioned Artificial Intelligence", a term used for the
line of mostly symbolic AI research made popular by John McCarthy, Marvin
Minsky, and others back in the 1960s and 1970s. It was those folks who, as
peer reviewers, made Hinton's life awful. Over the years the NIPS
conferences, rather than AAAI, became the home for connectionist researchers.

Now the wheel has come full circle. The triumph of Hinton's approach lets him
tell thousands of young researchers that their own research approaches are
finished unless they incorporate deep learning.

~~~
balls_deep
It's sad to think that the NN researchers may overreach and suppress other
research approaches now that they are taking a (well-deserved) victory lap in
the popular press. This is the typical pattern in academia, however. What is
a good antidote for this? Although I work in another field, I've definitely
felt the sting of a bad review for work that didn't conform to the fashions
of the day. It is very unjust. Hopefully Hinton and his celebrated cohort
will show restraint and give the GOFAI community a break while it is down. I
wouldn't be surprised if NN (deep learning) research encounters barriers to
progress in the future and a variation on GOFAI proves to be the way forward.

~~~
SlipperySlope
DARPA, the US sponsor of much AI research, is quite balanced, taking risks on
many different approaches to challenge problems. Robotics especially,
grounded as it is in physical reality, employs a hierarchy of methods:
chiefly sub-symbolic at the lower layers for sensing, e.g. voice recognition
and visual object recognition, and symbolic in nature for higher-level
cognitive functions, e.g. route planning.

I am glad that the AAAI accommodates all the disparate research fields that
make up AI. With thousands of papers presented, the talks I enjoyed most,
aside from the keynotes, were those given by chairs of the specialty
conferences describing what's happening in their fields. Awesome things are
going on, propelled by Moore's law and by contributions from every continent.

------
reneobolensky
I think the article is a tiny bit misleading. The perceived lack of
enthusiasm for deep learning in neuroscience stems mainly from the fact that
such multilayer networks are not plausible models of neural architecture. In
that way they are not unlike chess computers, which surpass human performance
by a wide margin yet offer little to no insight into how a human plays chess.
As data analysis tools, however, they are quite brilliant.

~~~
eli_gottlieb
OK, so what _are_ the neuroscientifically and biologically plausible models
of how learning and reasoning happen in the brain?

------
ctl
This is astonishingly high quality for pop science. It deserves more upvotes!

