
Ask HN: Will there ever be a resurgence of interest in symbolic AI? - snazz
Symbolic AI fell by the wayside at the beginning of the AI winter. More recently, with powerful GPUs making ML and other statistical AI approaches feasible, symbolic AI has not seen anywhere near as much investment.

There are still companies I know of that do symbolic AI (such as https://www.cyc.com), but I very rarely hear of new research in the field.
======
brundolf
Employee of Cycorp here. Aside from the current ML hype-train (and the
complementary unfashionability of symbolic AI), I think the reason symbolic AI
doesn't get as much attention is that it's much more "manual" in a lot of
ways. You get more intelligent results, but that's because more conscious
human thought went into building the system. As opposed to ML, where you can
pretty much just throw data at it (and today's internet companies have _a lot_
of data). Scaling such a system is obviously a major challenge. Currently we
support loading "flat data" from DBs into Cyc - the general concepts are hand-
crafted and then specific instances are drawn from large databases - and we
hope that one day our natural language efforts will enable Cyc to assimilate
new, more multifaceted information from the web on its own, but that's still a
ways off.

I (and my company) believe in a hybrid approach; it will never be a good idea
to use symbolic AI for getting structured data from speech audio or raw
images, for example. But once you have those sentences, or those lists of
objects, symbolic AI can do a better job of reasoning about them. By pairing
ML and symbolic methods, each can cover the other's weaknesses.

~~~
voldacar
What kind of experiments have you guys done that combine symbolic and
statistical/ML methods? It sounds like an area ripe for research.

~~~
brundolf
I know we use ML to "grease the wheels" of inference; i.e., Cyc gains an
intuition about what kinds of paths of reasoning to follow when searching for
conclusions. I don't know of any higher-level hybridization experiments; I
think we only have one ML person on staff and mostly our commercial efforts
focus on accentuating what we can do that ML can't, so we haven't had the
chance to do many projects where we combine the two as equals.

~~~
brundolf
To clarify the above:

"Cyc gains an intuition about what kinds of paths of reasoning to follow when
searching for conclusions"

The _possible_ paths come purely from symbolics. But that creates a massive
tree of possibilities to explore, so ML is used simply to _prioritize_ among
those subtrees.
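
Concretely, that division of labor (symbolic generation of candidate steps, with a learned ordering on top) can be sketched in a few lines of Python. This is a minimal illustration, not Cyc's actual machinery; `expand`, `is_goal`, and `score` are hypothetical stand-ins:

    import heapq

    def guided_search(root, expand, is_goal, score, budget=10000):
        # expand(node): all legal next inference steps (purely symbolic).
        # score(node):  learned estimate of how promising a branch is,
        #               e.g. a model trained on previously successful proofs.
        frontier = [(-score(root), 0, root)]
        tie = 1  # tie-breaker so nodes never get compared directly
        while frontier and budget > 0:
            _, _, node = heapq.heappop(frontier)
            budget -= 1
            if is_goal(node):
                return node
            for child in expand(node):  # symbolic part: what is possible
                # ML part: only decides what to try first
                heapq.heappush(frontier, (-score(child), tie, child))
                tie += 1
        return None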

~~~
Iv
Basically you are learning the heuristic? Do you have any public information
on that? That's something I have always wanted to work on, and I really think
it could be a shortcut to AGI...

------
ssivark
I'm not an expert on this, but here's my current understanding:

Symbolic reasoning/AI is fantastic when you have the right concepts/words to
describe a domain. Often, the hard ("intelligent") work of understanding a
domain and distilling its concepts needs to be done by humans. Once this is
done, it should in principle be feasible to load this "DSL" into a symbolic
reasoning system to automate the process of deduction.
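
To make that concrete, here is a toy sketch of the workflow in Python, with the facts and the single rule playing the role of a hand-distilled "DSL" (all names invented for illustration):

    # Human-crafted domain knowledge: ground facts plus one rule.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def rule_grandparent(fs):
        # (parent X Y) and (parent Y Z)  =>  (grandparent X Z)
        return {("grandparent", x, z)
                for (p1, x, y) in fs if p1 == "parent"
                for (p2, y2, z) in fs if p2 == "parent" and y2 == y}

    # Mechanical deduction: apply the rule until a fixed point.
    while True:
        new = rule_grandparent(facts) - facts
        if not new:
            break
        facts |= new

    print(("grandparent", "alice", "carol") in facts)  # True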

The challenge is, what happens when you don't have an appropriate distillation
of a complex situation? In the late eighties and early nineties, Rodney Brooks
and others [1] wrote a series of papers [2] pointing out how symbols (and the
definiteness they entail) struggle with modeling the real world. There are
some claimed relations to Heideggerian philosophy, but I don't grok that yet.
The essential claim is that intelligence needs to be situated (in the
particular domain) rather than symbolic (in an abstract domain). The "behavior
driven" approach to robotics stems from that cauldron.

[1]: Authors I'm aware of include Philip Agre, David Chapman, Pattie Maes, and
Lucy Suchman.

[2]: For a sampling, see the following papers and related references:
"Intelligence without reason", "Intelligence without representation",
"Elephants don't play chess".

~~~
mindcrime
_The essential claim is that intelligence needs to be situated (in the
particular domain) rather than symbolic (in an abstract domain)._

I think there is something (a lot) to this. Consider how much of our learning
is experiential, and would be hard to put into a purely abstract
symbol-manipulating system. Take "falling down", for example. We (past a
certain age) _know_ what it means to "fall", because we _have_ fallen. We
understand the idea of slipping, losing your balance, stumbling, and falling
due to the pull of gravity. We know it hurts (at least potentially), and we
know that skinned elbows, knees, palms, etc. are a likely consequence. And
that experiential learning informs our use of the term "fall" in the metaphors
and analogies we use in other domains ("the market fell 200 points today, on
news from China...") and so on.

This is one reason I like to make a distinction between "human level"
intelligence and "human like" intelligence. Human level intelligence is, to my
way of thinking, easier to achieve, and has arguably already been achieved,
depending on how you define intelligence. But human like intelligence, which
features that understanding of the natural world, some of what we call "common
sense", etc., seems like it would be very hard to achieve without an
intelligence that experiences the world like we do.

Anyway, I'm probably way off on a tangent here, since I'm really talking about
embodiment, which is related to, but not exactly the same as, situated-ness.
But that quote reminded me of this line of thinking for whatever reason.

~~~
maxirater
I think the opposite is true. Humans think in terms of symbols to model the
world around them. A child is born knowing nothing, a completely blank slate,
and slowly he learns about his surroundings. He discovers he needs food, that
he needs to be protected and cared for. He discovers he doesn't like pain. If
you talk to a three-year-old child you can have a fairly intelligent
conversation about his parents, about his sense of security, because this
child has built a mental model of the world as a result of being trained by
his parents. This kind of training requires context and cross-referencing of
information, which can only be done by inference. You can't train a child by
flashing 10,000 pictures at him, because pictures are not experience; even
adults can be fooled by pictures, which are only 2D representations of 3D
space. So all these experiences that a small child has of knowing about the
world come to him symbolically; these symbols model the world and give even a
small child the ability to reason about external things and classify them.
This is human level intelligence.

Human like intelligence is training a computer to recognize pixel patterns in
images so it can make rules and inferences about what these images mean. This
is human like intelligence in that the resulting program can accomplish human
like tasks of recognition without the need for context on what these images
might mean. But there is no context involved about any kind of world; this is
pure statistical training.

~~~
woodandsteel
> Humans think in terms of symbols to model the world around him. A child is
> born knowing nothing, a completely blank slate, and slowly he learns about
> his surroundings.

Actually, the research has found that newborn infants can perceive all sorts
of things, like human faces and emotional communication. There is also a lot
of inborn knowledge about social interactions and causality. The embodied
cognition idea is looking at how we experience all that.

By the way, Kant demonstrated a couple of centuries ago that the blank slate
idea was unworkable.

~~~
maxirater
>Actually, the research has found that newborn infants can perceive all sorts
of things, like human faces and emotional communication.

Yes, that's called sensory input... a child deprived of sensory input when
newborn can die, because there is nothing there to show the baby its
existence; this is the cause of crib death (notice that crib death is not
called arm death, because a baby doesn't die in its mother's arms).

>There is also a lot of inborn knowledge about social interactions and
causality.

No, babies are not born with any knowledge at all of even the existence of
society or beings. Causality is learned as the result of human experience;
causality is not known at birth.

~~~
Retra
There's no consistency in thinking the human brain learns things using purely
statistical methods and then turning around to argue that evolution cannot
encode the same information into the structure of a baby using those exact
same methods. Humans have lots of instinctual knowledge: geometry, facial
recognition, kinesthetics, emotional processing, and an affinity for symbolic
language and culture, just to name a few. What we don't have is knowledge of
the specific details needed for socialization and survival.

------
webmaven
Hybrid approaches have been getting some interesting results lately [0], and
will probably continue to do so, but the approaches of statistical and
symbolic AI are so different that these are essentially cross-disciplinary
collaborations (and each hybrid system I've seen is essentially a one-off that
occupies a unique local maximum).

I suspect that eventually there will be an "ImageNet Moment" of sorts starring
a statistical/symbolic hybrid system and we'll see an explosion of interest in
a family of architectures (but it hasn't happened yet).

[0] [http://news.mit.edu/2019/teaching-machines-to-reason-about-what-they-see-0402](http://news.mit.edu/2019/teaching-machines-to-reason-about-what-they-see-0402)

------
xvilka
Well, symbolic AI people also work on probabilistic reasoning. A
production-level example is ProbLog [1][2], used in genetics. There is even
DeepProbLog [3], adding deep learning into the mix (a short example follows
the links below). The only problem is that both are implemented in Python; I
hope there will be alternatives in native languages. Scryer Prolog [4] might
become that implementation one day (it is written in Rust). Another approach
is to extend vanilla Prolog, as cplint [5] does.

[1]
[https://dtai.cs.kuleuven.be/problog/](https://dtai.cs.kuleuven.be/problog/)

[2]
[https://bitbucket.org/problog/problog](https://bitbucket.org/problog/problog)

[3]
[https://bitbucket.org/problog/deepproblog](https://bitbucket.org/problog/deepproblog)

[4] [https://github.com/mthom/scryer-prolog](https://github.com/mthom/scryer-prolog)

[5] [https://github.com/friguzzi/cplint](https://github.com/friguzzi/cplint)
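
For a quick taste of probabilistic logic programming, here is the classic burglary/alarm toy model run through ProbLog's Python interface. A minimal sketch, assuming ProbLog's documented PrologString/get_evaluatable API; see the docs linked above for the authoritative details:

    from problog.program import PrologString
    from problog import get_evaluatable

    model = PrologString("""
    0.1::burglary.
    0.2::earthquake.
    0.9::alarm :- burglary.
    0.3::alarm :- earthquake.
    evidence(alarm, true).
    query(burglary).
    """)

    # Compile the program and compute the posterior of each query atom.
    print(get_evaluatable().create_from(model).evaluate())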

------
maxander
Symbolic AI and ML/DL AI are two entirely different technologies with
different capabilities and applications, that both happen to be called "AI"
for mostly cultural reasons. The success of one is probably unrelated to the
success or failure of another. In most ways, symbolic AI has "faded" simply in
that we now take most of its capabilities for granted; e.g., it never strikes
you as odd that Google Maps can use your phone CPU to instantly plot a course
for a cross-country roadtrip if you so desire, but that sort of thing was a
major research project way back when.

In contrast, ML/DL AI is still shiny and new and we have a much less clear
grasp of what its ultimate capabilities are, which makes it a ripe target for
research.

~~~
hadsed
Very much agree. To expand on this, check out Stuart Russell and Peter
Norvig's intro book on AI. It supports your comment: there's an entire section
(chapter?) on path planning and the like.

------
mark_l_watson
I expect hybrid deep learning and symbolic AI systems to be highly relevant.
My background, on which I base the following opinions: I spent the 1980s
mostly doing symbolic AI, except for 2 years of neural networks (I wrote the
first version of the SAIC Ansim neural network library, supplied the code for
a bomb detector we did for the FAA, and was on a DARPA neural net advisory
panel for a year). For the last 6 years, I have been just about 100% all-in
working with deep learning.

My strong hunch is that deep learning results will continue to be very
impressive, and that with improved tooling, basic applications of deep
learning will become largely automated, so the millions of people training to
be deep learning practitioners may have short careers; there will always be
room for the top researchers, but I expect model architecture search, even
faster hardware, and AIs that build models (AdaNet, etc.) will replace what is
now a lot of manual effort.

For hybrid systems, I have implemented enough code in Racket Scheme to run
pre-trained Keras dense models (code in my public github repos). For a new
side project I am using BERT and other pre-trained models wrapped with a REST
interface; my application code in Common Lisp has wrappers to make the REST
calls, so I am treating each deep learning model as a callable function.
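
That "model as a callable function" pattern is language-agnostic; a hedged Python equivalent of those Common Lisp wrappers might look like this (the endpoint URL and JSON schema are invented for illustration):

    import requests

    def embed(text, endpoint="http://localhost:8500/embed"):
        # POST the input to a served model (e.g. BERT behind a REST
        # wrapper) and return its output, like any ordinary function.
        resp = requests.post(endpoint, json={"text": text}, timeout=10)
        resp.raise_for_status()
        return resp.json()["vector"]

    # Application code can then compose models freely, e.g.:
    #   similarity = cosine(embed(doc_a), embed(doc_b))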

~~~
p1esk
_millions of people training to be deep learning practitioners may have short
careers_

Have database admins disappeared? How about front end devs?

~~~
ElFitz
I don't know if database admins have disappeared. But we have never needed one
to take care of our DynamoDB tables and our "serverless" Aurora databases.

I'm pretty sure AWS needs a lot of them, though; just not one (or more) for
each and every single one of their customers.

~~~
oneplane
I'm sure your queries are horrible :p DBAs are generally not just scoped to
keeping the RDBMS alive.

~~~
ElFitz
Haha ^^ My SQL skills are indeed quite limited. But most of my stuff relies on
DynamoDB, mostly in a strictly key-value fashion.

Still, queries alone are a lot less to learn, and much easier to fix when I
get them wrong, than queries plus taking care of (maintaining, scaling,
fixing, backing up, ...) the database itself.

------
Darmani
A lot of what's been going on in the PL community would have been called
"symbolic AI" in the '80s: program synthesis, symbolic execution, test
generation, many forms of verification, all involving some kind of SAT or
constraint solving.

~~~
zoba
What is the PL community?

~~~
gjstein
Programming Languages

------
wittedhaddock
Please check out the Genesis Group at MIT's CSAIL. Or Patrick Winston's Strong
Story Hypothesis. Or Bob Berwick. Many at MIT are still working through the
80's winter, without the confirmation bias of Minsky and Papert's Perceptrons,
with all the computational power and none of the theory (now called neural
nets). Or any of the papers here:
[https://courses.csail.mit.edu/6.803/schedule.html](https://courses.csail.mit.edu/6.803/schedule.html)

Or the work of Paul Werbos, the inventor of backpropagation, which was heavily
influenced by (though itself perhaps outside the canon of) strictly symbolic
approaches.

------
PaulHoule
Let's see.

Databases (isn't Terry Winograd's SHRDLU conversation the kind of conversation
that you have with the SQL monitor?). Compilers (e.g., programming languages
use theories developed to understand human languages). Business rules engines.
SAT/SMT solvers. Theorem proving.

There is sorta this unfair thing where once something becomes possible and
practical it isn't called A.I. anymore.

~~~
Iv
Well when we apply materials developed for space flight to kitchen equipment,
it stops being called space tech.

Space tech is when you try to go to space. AI is when you aim at AGI.

------
Animats
The big win in symbolic AI has been in theorem proving. In spaces which do
have a formal structure underneath, that works well. In the real world, not so
much.

------
YeGoblynQueenne
>> There are still companies I know of that do symbolic AI (such as
[https://www.cyc.com](https://www.cyc.com)), but I very rarely hear of new
research in the field.

I can't answer your main question but, as a practical matter, if you don't
hear of new research in the field it probably means you're not tuned in to the
right channels, which is to say, the proceedings of the main AI conferences
that cover a broad range of subjects: AAAI, IJCAI, IROS, ICRA, plus the more
specialised ones, like AAMAS, ICAPS, UAI, KR, JELIA, etc. Any interesting
research in symbolic AI is going to be there.

If you get your news from the tech press and the internet, you won't hear of
any of that stuff and won't even know it's going on because, let's face it,
the large tech companies are championing a very specific kind of AI
(statistical machine learning with deep neural nets) and, well, they have the
airwaves, they have the hype engines, and their noise is drowning out all
other information on the same channels.

For the record, work on symbolic AI is still going on. For instance, the
subject area of my PhD research is Inductive Logic Programming, a branch of
symbolic, logic-based machine learning. This is not just active, but going
strong, with a recent explosion of research in learning Answer Set Programs
and the work of my group on a new ILP technique, Meta-Interpretive Learning.
If we're inventing new stuff, we're still alive and well.

------
nostrademons
I think the biggest problem with symbolic AI systems is that you can only
program them with "facts" that have bubbled up into consciousness. Most of
human behavior is unconscious. Statistical AI tends to observe what people
_do_ instead, which is a better representation of how they will actually
behave in the real world. Statistical AI trained on self-reported data (i.e.
asking people what they think instead of observing what they do) has many of
the same problems as symbolic AI.

This is also the biggest weakness of statistical AI, and why so many people
are mad at companies that employ it. If you train on what people do, you also
capture all the behavior that people wish they didn't do. Thus you get all the
racism, sexual fetishes, discord, unpopular views, irrational views,
tribalism, and general stupidity that folks would prefer to pretend doesn't
exist, but shows up all the time to an objective observer of humanity.

------
acapybara
Neural networks have been around for a much longer time than they have been
popular/practical/commercially viable. It just so happens that they can be
accelerated using dedicated floating-point computing hardware, something GPUs
are very good at.

I often think about symbolic AI and how it relates to Boolean satisfiability.
This is an integer problem, and we don't seem to have a technology analogous
to GPUs that would be transferable to this problem domain. If we had that,
maybe things would be different. I looked into this a bit, and Microsoft seems
to have put some resources into a SAT-computing ASIC.

To get the same kind of progress in symbolic AI, perhaps we need massively
parallel/scalable SAT solving hardware. The gaming industry gave us the
initial floating point hardware; maybe the cryptocurrency industry will gift
us with analogous integer hardware that could push symbolic AI further.
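
For readers who haven't looked inside a SAT solver: the core loop is branching search plus unit propagation, an all-integer/Boolean workload with essentially no floating point. A bare-bones DPLL sketch in Python (clauses as lists of signed ints, DIMACS-style; real solvers add clause learning, watched literals, and branching heuristics):

    def dpll(clauses, assignment=()):
        # Clauses are lists of nonzero ints: 3 means x3, -3 means NOT x3.
        while True:  # unit propagation: assign literals that are forced
            units = [c[0] for c in clauses if len(c) == 1]
            if not units:
                break
            lit = units[0]
            assignment += (lit,)
            new = []
            for c in clauses:
                if lit in c:
                    continue                    # clause satisfied
                reduced = [l for l in c if l != -lit]
                if not reduced:
                    return None                 # empty clause: conflict
                new.append(reduced)
            clauses = new
        if not clauses:
            return assignment                   # everything satisfied
        lit = clauses[0][0]                     # branch on some variable
        for choice in (lit, -lit):
            result = dpll([[choice]] + clauses, assignment)
            if result is not None:
                return result
        return None

    print(dpll([[1, 2], [-1, 2], [-2, 3]]))  # (1, 2, 3): a satisfying assignment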

------
wwarner
I keep waiting for symbolic AI to really get traction in the area of software
verification: either a great language/compiler that can really prove a program
has no bugs, or a testing approach that takes a binary and proves that it has
no bugs of a certain kind. What we've been getting, though, is evidence that
the space of possibilities quickly outpaces our capacity to search it, and so
logical methods don't compare well to statistical approaches. I would never
close the door on it, though, since programming is basically thinking
symbolically. It's possible that as the industry gets more concerned with
privacy and security, the class of applications that _need_ this kind of
provability will drive adoption at the current state of the art.

~~~
psoy
Ever heard of the halting problem?

~~~
rileymat2
The halting problem only says that a program cannot decide every possible
program; it does not say much about the programs that the vast majority of
programmers are likely to write day to day.

------
throwawaysymbAI
What do you guys think of the conceptual spaces approach of Peter Gärdenfors?

See eg here:

[https://mitpress.mit.edu/contributors/peter-gardenfors](https://mitpress.mit.edu/contributors/peter-gardenfors)

[https://www.youtube.com/watch?v=Y3_zlm9DrYk](https://www.youtube.com/watch?v=Y3_zlm9DrYk)

From reading some papers, it seems his approach is a third way beyond symbolic
and connectionist.

Indeed the title of that lecture is "The Geometry of Thinking: Comparing
Conceptual Spaces to Symbolic and Connectionist Representations of
Information"

Would you say conceptual spaces is a third way and how does it apply to the
topic discussed in this thread?

~~~
TuringTest
I'd say it's symbolic, but not combinatorial. TL;DR: it looks a lot closer to
symbolic than to connectionist, but it seems a promising new approach within
symbolic methods.

What we call symbolic AI usually makes inference by exploring the space of
possibilities generated by recombining the basic symbols of a (fixed) domain
language.

Gärdenfors' approach has a lot of this, in that it has a symbolic
representation of data: a well-defined set of symbols that stand for objects
in the observed domain (animals, in the example given); additionally, each
symbol has a numerical value which represents how much of each property the
object possesses.

This is somewhat similar to the knowledge systems of the '70s and '80s for
incomplete, approximate rule-based reasoning. But those were problematic,
because it was very difficult to do reasoning with their numerical values.
When combining facts within the database, the respective combinations of their
numerical values often had nonsensical meanings. The algebras used in those
systems were not a good fit.

If Gärdenfors is right and concepts can be treated as mathematical spaces with
convex regions, his approach could solve some major problems of those systems
that made them impractical, and maybe bring them to prominence again.
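
One concrete payoff of the convexity claim: if every concept is a convex region around a prototype, categorization reduces to nearest-prototype lookup, and the regions tile the space as a Voronoi partition. A toy sketch (the quality dimensions and prototype values are invented for illustration):

    import math

    # Quality dimensions: (size, ferocity), scaled to 0..1.
    prototypes = {
        "mouse": (0.05, 0.1),
        "dog":   (0.4, 0.4),
        "lion":  (0.6, 0.9),
    }

    def categorize(point):
        # Nearest prototype under Euclidean distance; with convex
        # regions this induces a Voronoi partition of the space.
        return min(prototypes, key=lambda k: math.dist(point, prototypes[k]))

    print(categorize((0.5, 0.8)))  # -> "lion"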

------
snrji
People tend to draw a hard distinction between symbolic AI and machine
learning, but actually some machine learning algorithms are based on symbols
(e.g. decision trees and association rules build logical rules). I recommend
Pedro Domingos' book, _The Master Algorithm_, in which he describes the "5
machine learning tribes" (one of them is referred to as "the symbolists") and
advocates for a unification of different machine learning algorithms. He even
proposes a particular algorithm that would fulfill these criteria: Markov
logic networks. He has developed an implementation, called Alchemy
([https://alchemy.cs.washington.edu/](https://alchemy.cs.washington.edu/)).

If by symbolic AI we mean GOFAI, expert systems, etc., I don't think there
will ever be a resurgence. But if by symbolic AI we mean machine learning
algorithms that are somehow based on symbolic reasoning, I do think that there
will be a resurgence. In particular, this resurgence will start when: a) deep
learning arrives at its limit (i.e. research gets stuck), and/or b) someone
finds a scalable and SOTA-ish way to integrate symbols into gradient-based
algorithms.

------
sorryforthethro
Random forests produce the same kind of decision trees that used to be
hand-crafted, but admittedly, the ones they generate look distinctly
"non-human".
------
iandanforth
Neural representations are messy; this is both a strength and a weakness. It
is a strength because it allows you to easily interpolate in the latent space
of the representations, in ways that might not be reflected by the training
data or any rule-set that a human could come up with. This underlies the power
of neural networks to generalize.

Symbolic representations are clean; this is both a strength and a weakness.
You might have perfectly separated categories, but the real world frequently
presents inputs that break taxonomies.

We invented symbols like letters and numbers to reduce the complexity of the
real world. Language and mathematics are lossy representations but also
incredibly useful models.

Given the value that symbols and symbolic methods have for us, I have little
doubt that they will be an integral part of efficient AI systems in the
future. You could train a neural world model on the ballistic properties of a
rocket, but if calculating is orders of magnitude more efficient, why not
learn to calculate instead?
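
To make the rocket example concrete: the symbolic alternative to a learned world model is often a one-line formula. Ignoring drag, a projectile's range is R = v^2 * sin(2*theta) / g, which a few characters of code compute exactly (a toy sketch, of course; real rockets need far richer models):

    import math

    def ballistic_range(v, theta_deg, g=9.81):
        # Drag-free projectile range: R = v^2 * sin(2*theta) / g.
        # A network would need many samples to approximate what two
        # symbols' worth of physics gives exactly.
        return v**2 * math.sin(2 * math.radians(theta_deg)) / g

    print(round(ballistic_range(100, 45), 1))  # ~1019.4 m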

------
mindcrime
It's really hard to make predictions... especially about the future.[1] But to
the extent that I have anything to say about this, I'll offer this:

1\. For all the accomplishments made with Deep Learning and other "more
modern" techniques (scare quotes because deep learning is ultimately rooted in
ideas that date back to the 1950's), one thing they don't really do (much of)
is what we would call "reasoning". I think it's an open question whether or
not "reasoning" (for the sake of argument, let's say that I really mean
"logical reasoning" here) can be an emergent aspect of the kinds of processes
that happen in artificial neural networks. Perhaps if the network is
sufficiently wide and deep? After all, it appears that the human brain is
"just neurons, synapses, etc." and we manage to figure out logic. But so far
our simulated neural networks are orders of magnitude smaller than a real
brain.

2\. To my mind, it makes sense to try and "shortcut" the development of
aspects of intelligence that _might_ emerge from a sufficiently broad/deep
ANN, by "wiring in" modules that know how to do, for example, first order
logic or $OTHER_THING. But we should be able to combine those modules with
other techniques, like those based on Deep Learning, Reinforcement Learning,
etc. to make hybrid systems that use the best of both worlds.

3\. The position stated in (2) above is neither baseless speculation /
crankery, nor is it universally accepted. In a recent interview with Lex
Fridman, researcher Ian Goodfellow seemed to express some support for the idea
of that kind of "hybrid" approach. Conversely, in an interview in Martin
Ford's book _Architects of Intelligence_ , Geoffrey Hinton seemed pretty
dismissive of the idea. So even some of the leading researchers in the world
today are divided on this point.

4\. My take is that neither "old skool" symbolic AI (GOFAI) nor Deep Learning
is sufficient to achieve "real AI" (whatever that means), at least in the
short-term. I think there will be a place for a resurgence of interest in
symbolic AI, in the context of hybrid systems. See what Goodfellow says in the
linked interview, about how linking a "knowledge base" with a neural network
could possibly yield interesting results.

5\. As to whether or not "all of intelligence" including reasoning/logic could
simply emerge from a sufficiently broad/deep ANN... we only just have the
computing power available to train/run ANN's that are many orders of magnitude
smaller than actual brains. Given that, I think looking for some kind of
"shortcut" makes sense. And if we want a "brain" with the number of neurons
and synapses of a human brain, that takes forever to train, we already know
how to do that. We just need a man, a woman, and 9 months.

[1]: [https://quoteinvestigator.com/2013/10/20/no-predict/](https://quoteinvestigator.com/2013/10/20/no-predict/)

[2]: [https://www.youtube.com/watch?v=Z6rxFNMGdn0&feature=youtu.be&t=1457](https://www.youtube.com/watch?v=Z6rxFNMGdn0&feature=youtu.be&t=1457)

[3]: [http://book.mfordfuture.com/](http://book.mfordfuture.com/)

~~~
jodrellblank
_if we want a "brain" with the number of neurons and synapses of a human
brain, that takes forever to train, we already know how to do that. We just
need a man, a woman, and 9 months._

Geoff Hinton comments on a Reddit AMA that "The brain has about 10^14 synapses
and we only live for about 10^9 seconds. So we have a lot more parameters than
data. This motivates the idea that we must do a lot of unsupervised learning
since the perceptual input (including proprioception) is the only place we can
get 10^5 dimensions of constraint per second."

That sounds to me like humans don't take "forever to train" and definitely
don't learn from "big data" compared to the size of data we feed into a small
machine neural network. Brains must already have a lot of shortcuts built-in.

(comment is from
[https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/clyjogf/](https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton/clyjogf/)
)

~~~
mindcrime
_humans don't take "forever to train"_

I was just being glib about that. "Forever" is just hyperbole, but the
10-or-so years it takes to go from birth to useful for most intellectual tasks
is a pretty long time in relative terms.

_Brains must already have a lot of shortcuts built-in._

Oh, absolutely. My point is just that there's no reason for us to _not_ pursue
"shortcuts", as opposed to trying to build an ANN that's big enough to
essentially replicate the actual mechanics of a real brain.

To extend this overall point, though: it may be that as we learn newer/better
algorithms and techniques, we find out that you can actually make an ANN that
would, for example, learn to do logical reasoning. And it might do so without
needing anywhere near the number of neurons and synapses that a real brain
uses. But until such a time as it becomes apparent that this is likely, I
think it's a good idea to continue researching "hybrid" systems that hard-wire
in elements like various forms of symbolic/logical reasoning and anything else
that we at least sorta/kinda understand.

~~~
sgt101
We are often deceived by the fact that human infants are optimised for
plasticity (I know this is arguable, but it's a reasonable theory) and for
their brain to get through a biped's birth canal (and subsequently grow). Look
at lambs, in contrast (I've been on a sheep farm in Scotland for a couple of
weeks, so I've had the opportunity!). Lambs stand up about 3 to 10 minutes
after birth (or there is a problem). They walk virtually immediately after
that, they find the sheep's udder and take autonomous action to suckle within
an hour (normally), and follow their mothers across a field, stream, up a
hill, over bridges as soon as they can walk. Within a week they are building
social relations with other sheep and lambs, and within three weeks they are
charging round fields playing games that appear pretty complex in terms of
different defined places to run up to and back, and so on.

This kind of rapid cognitive development argues strongly (IMO) against the
kind of experimental/experiential training that a tabula-rasa NN approach
would indicate.

Human plasticity and logical reasoning are the apex of other processes and
approaches; I think we fixate on them because we have so much access
(personally through introspection and socially via children) to models of
these processes, and the results are so spectacular and intrinsically
impressive.

I used to go to the SAB conferences in the '90s; they're still going, but
somewhat diminished, I think. This was where the "Sussex School" of AI had its
largest expression: Phil Husbands, Maggie Boden and John Maynard Smith all
spoke about the bridges between animal cognition and self-organising systems.
I am pretty sure that they were all barking up the wrong tree (he he he), but
there was and is a lot of mileage in the approach.

------
mietek
AlphaGo is a hybrid system, using deep reinforcement learning and Monte Carlo
tree search. Tree search dates back to Shannon, before neural networks.
AlphaGo is a triumph of symbolic as well as statistical AI.
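
The glue between the two halves is the selection rule: the network's policy prior steers the symbolic tree search. A schematic PUCT-style score in Python, in the spirit of the AlphaGo papers (the node fields and the constant here are illustrative, not DeepMind's code):

    import math

    def puct_score(child, parent_visits, c_puct=1.5):
        # Exploit the search's own value estimate (Q, from simulations)
        # plus an exploration bonus shaped by the network's prior P.
        q = child.value_sum / child.visits if child.visits else 0.0
        u = c_puct * child.prior * math.sqrt(parent_visits) / (1 + child.visits)
        return q + u

    # At each node the search descends into the child maximizing this
    # score: the tree supplies legality and lookahead, the net supplies
    # priors and evaluations.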

------
_delirium
One thing I haven't seen mentioned here yet: symbolic planning is still a
pretty large area. It's not really visible if you look at what people are
starting AI startups around, but there are a bunch of large companies that use
symbolic planning systems, and it's also an active research area, even if not
the most in-vogue one at the moment.

I have no inside information on why they're interested, but it's also
intriguing that DARPA continues to pour money into planning system R&D.

------
eesmith
"fell by the wayside at the beginning of the AI winter"

I believe the various aspects of the Semantic Web are a continuation of
symbolic AI. My two cents as a complete outsider on the topic.

~~~
0815test
They are, but the successful part of the Semantic Web is almost entirely
limited to open-source datasets (grouped under the "Linked Data", or "Linked
Open Data" initiative). That's pretty much the only part of the web that
actually has an incentive to release their info in machine-readable format -
everyone else would rather control the UX end-to-end and keep users dependent
on their proprietary websites or apps.

------
sal9000
A resurgence in interest might already be underway; check, for example, this
very recent work from Joshua Tenenbaum's lab (MIT):

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language
Understanding
[https://arxiv.org/abs/1810.02338](https://arxiv.org/abs/1810.02338)

It actually builds on ideas that Prof. Tenenbaum has been presenting and
discussing over the past few years.

------
_0ffh
I wonder about the potential of fusing subsymbolic with symbolic systems:
continually learning and updating a set of feature vectors to serve as a
dictionary, translating between the subsymbolic and symbolic parts of an
integrated learning framework. I think of that as analogous to how the older,
more intuitive parts of the brain and the language-based, reflective, linear,
reasoning parts work together.

------
crististm
I feel that AI in the form of machine learning has the higher ground because,
as an engineer, you have an attack on the problem that gets you moving,
instead of contemplating the what-ifs of symbolic sci-fi.

There is also the question of whether ML is simply moving us into the ditch of
a local optimum. I think it is.

I'd love to see symbolic AI gain a foothold, if only to have it explain back
its rationalizations (which ML can't).

~~~
TuringTest
I have this hypothesis that, once the field of ML stabilizes around a mature
industry, we'll start seeing people using symbolic tools to generate
explanations of the concepts learnt by the deep learning networks.

------
wslh
Aren't Mathematica and automated proving systems successful cases where
symbolic AI happens?

------
daly
I have a long background in AI (robotics, PDP, expert systems, symbolic math,
vision, planning).

There appear to be two classes of knowledge. Pattern knowledge, such as riding
a bicycle, which we tend to learn in ways similar to the current machine
learning trend. In some ways, this is "deductive knowledge". On the other
hand, Explicit knowledge, such as learning to reason about proofs, which we
tend to learn by teaching, is symbolic. In some ways, this is "inductive
knowledge".

The current machine learning trend leans heavily on Pattern knowledge. I don't
believe it will extend into the Explicit knowledge domain. I fear that once
this distinction becomes important, it will be seen as a "limit of AI",
leading to yet another AI winter. I tried to bring this up in the OpenAI Gym
([https://gym.openai.com/](https://gym.openai.com/)), but it went nowhere.

My experience leads me to hold the very unpopular opinion that AI requires a
self-modifying system. Computers differ from calculators because they can
modify their own behavior. I'm of the opinion that there is an even deeper
kind of self-modification that is important for general AI. The physical
realization of this in animals is due to the ability to grow new brain
connections based on experience. One side-effect is that two identical self-
modifying systems placed in different contexts will evolve differently. (A
trivial example would be the notion of a "table" which is a wood structure to
one system and a spreadsheet to the other system). Since they evolve different
symbolic meanings they can't "copy their knowledge" but have to transfer it by
"teaching".

Self-modification allows for adaptation based on internal feedback rather than
external patterns (e.g. imagination). It allows a kind of hardware
implementation of "genetic algorithms"
([https://en.wikipedia.org/wiki/Genetic_algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm)).
It allows "Explicit knowledge" to be "compiled" into "Pattern knowledge". This
effect can be seen when you learn a skill like music or knitting. After being
taught a manual skill you eventually "get it into your fingers", likely by
self-modification, growing neural pathways.

Of all of the approaches I've seen, I think Jeff Hawkins of Numenta
([https://www.amazon.com/Intelligence-Understanding-Creation-Intelligent-Machines-ebook/dp/B003J4VE5Y](https://www.amazon.com/Intelligence-Understanding-Creation-Intelligent-Machines-ebook/dp/B003J4VE5Y))
is on the right track. However, he needs to extend his theories to handle
self-modification in order to get past the "pattern knowledge" behavior.

~~~
amy12xx
>> "Pattern knowledge, such as riding a bicycle, which we tend to learn in
ways similar to the current machine learning trend. In some ways, this is
"deductive knowledge". "

Deduction is given a rule and cause, find (deduce) the effect, whereas
Induction is given cause and effect, induce the rule. Isn't machine learning
more inductive (given observations and outcome, induce the decision function)?

------
lquist
We aren't planning on serious research in the space, but it is becoming
increasingly obvious that an expert system is the right approach for our
business going forward, FWIW.

------
bluejay2387
There already has been, though it's nascent. Check the proceedings of AAAI
2019, or any of the more recent non-NIPS conferences, for details.

------
tanilama
Depends on whether it can be used to improve performance on existing tasks.

------
laser
Like neural nets managing symbolic systems managing neural nets, or something?

