
Explainability and Deep Learning - deeplstm
https://medium.com/@tilsml500/explainability-and-deep-learning-1a77c622aafc
======
phonicwheel
Good article, but there is a frequent misconception of Symbolic AI as "manual
creation of lots of rules". This was true for early approaches, such as expert
systems in the 70s/80s. Symbolic AI just means, well, AI with symbols, and
there are many approaches (e.g., in neuro-symbolic AI or using probabilistic
inductive logic programming) where symbolic representations are emergent /
learned from data using machine learning approaches and can be
uncertain/probabilistic.

In the linked talk "From System 1 Deep Learning to System 2 Deep Learning" by
Yoshua Bengio, the speaker first criticizes Symbolic AI only to re-invent
concepts from Symbolic AI later (e.g., "high level semantic variables",
"shared 'rules' across arguments"), which is rather silly given that some
Symbolic AI approaches are quite capable of learning symbols, rules, etc.
bottom-up - which is not fundamentally different from learning low-dimensional
vector representations or "generalizations" in linguistics.
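To make the "learned bottom-up" point concrete, here is a toy sketch (my own illustration, not any specific ILP system; the attribute names and data are invented) of inducing a symbolic rule from labeled examples via least-general generalization - keeping only the literals shared by all positive examples:

```python
def lgg(examples):
    """Least general generalization: keep only the attribute=value
    literals that every positive example agrees on."""
    rule = dict(examples[0])
    for ex in examples[1:]:
        rule = {k: v for k, v in rule.items() if ex.get(k) == v}
    return rule

def covers(rule, example):
    """A rule (conjunction of literals) covers an example if
    the example satisfies every literal."""
    return all(example.get(k) == v for k, v in rule.items())

# Invented toy data for illustration
positives = [
    {"has_feathers": True, "lays_eggs": True, "flies": True},
    {"has_feathers": True, "lays_eggs": True, "flies": False},
]
negatives = [
    {"has_feathers": False, "lays_eggs": True, "flies": False},
]

rule = lgg(positives)
# The induced rule is a human-readable symbol structure,
# yet it was learned from data, not hand-written:
print(rule)  # {'has_feathers': True, 'lays_eggs': True}
assert all(covers(rule, p) for p in positives)
assert not any(covers(rule, n) for n in negatives)
```

Real ILP/PILP systems search a much richer hypothesis space (first-order clauses, with probabilistic weights in the PILP case), but the principle is the same: the symbolic representation is the output of learning, not a hand-coded input.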

~~~
deeplstm
Thanks for the reply. I agree there are many different approaches to
symbolic AI. Probabilistic inductive logic programming sounds interesting, but
I am not familiar with it. Maybe I should check it out later. I am wondering
whether it is capable of learning from data.

