Ask HN: Will there ever be a resurgence of interest in symbolic AI?
225 points by snazz 33 days ago | 95 comments
Symbolic AI fell by the wayside at the beginning of the AI winter. More recently, with powerful GPUs making ML and other statistical AI approaches feasible, symbolic AI has not seen anywhere near as much investment.

There are still companies I know of that do symbolic AI (such as https://www.cyc.com), but I very rarely hear of new research in the field.




Employee of Cycorp here. Aside from the current ML hype-train (and the complementary unfashionability of symbolic AI), I think the reason symbolic AI doesn't get as much attention is that it's much more "manual" in a lot of ways. You get more intelligent results, but that's because more conscious human thought went into building the system. As opposed to ML, where you can pretty much just throw data at it (and today's internet companies have a lot of data). Scaling such a system is obviously a major challenge. Currently we support loading "flat data" from DBs into Cyc - the general concepts are hand-crafted and then specific instances are drawn from large databases - and we hope that one day our natural language efforts will enable Cyc to assimilate new, more multifaceted information from the web on its own, but that's still a ways off.

I (and my company) believe in a hybrid approach; it will never be a good idea to use symbolic AI for getting structured data from speech audio or raw images, for example. But once you have those sentences, or those lists of objects, symbolic AI can do a better job of reasoning about them. By pairing ML and symbolics, each can cover the other's weaknesses.


I've been following Cyc since the Lenat papers in the 80s. Wondering what happened to OpenCyc, if you guys changed your thinking about the benefits of an open ecosystem, and if there's any future plans there?


I've only been here for a couple years, so my perspective on that is limited. My understanding is that we still have some form of it available (I believe it's now called "ResearchCyc"), but there isn't a lot of energy around supporting it, much less promoting it.

As to why that is, my best guess is a combination of not having enough man-hours (we're still a relatively small company) and how difficult it has historically been for people to jump in and play with Cyc. There could also be a cultural lack of awareness that people still have interest in tinkering with it, which is something I've thought about bringing up for discussion.

As to the accessibility issue, that's been one of our greatest hurdles in general, and it's something we're actively working on reducing. The inference engine itself is something really special, but in the past most of our contracts have been pretty bespoke; we essentially hand-built custom applications with Cyc at their core. This isn't because Cyc wasn't generic enough, it's because Cyc was hard enough to use that only we could do it. We're currently working to bridge that gap. I'm personally part of an effort to modernize our UIs/development tools, and to add things like JSON APIs, for example. Others are working on much-needed documentation, and on sanding off the rough edges to make the whole thing more of a "product". We also have an early version of containerized builds. Currently these quality-of-life improvements are aimed at improving our internal development process, but many of them could translate easily to opening things up more generally in the future. I hope we do so.


Good write-up, confirms my suspicions. Thanks for your thoughts.


There's an official statement of sorts here: https://www.cyc.com/opencyc/

That meshes with what I've heard at conferences, that Cyc management was worried people were treating OpenCyc as an evaluation version of Cyc, even though it was significantly less capable, and using its capabilities to decide whether to license Cyc or not. The new approach seems to be that you can get a free version of Cyc (the full version) for evaluation or research purposes, and the open-source version was discontinued.


Many aspects of this topic deserve extensive study. For example, ML is all about generalizability, and ever since deep learning flooded the field, it seems like numeric representations (tensors) always yield better generalizability than symbolic representations. Is that true? Or, under what circumstances does symbolic representation help?

In the past couple of years, several papers have shown that predefined symbolic relationships can improve over vanilla DL. For example, recognizing a picture of numerical arithmetic equations and computing the result. This is very difficult for neural networks to parametrize over the pixel space.

Moreover, statistical generalizability is currently all derived from concentration theories. This means that the knowledge encoded by a neural network model depends on the statistical distribution of the data. If two people have two different sets of data, they might end up with two very different models. Symbolic generalizability is quite different. A rigorous mathematical proof holds true as long as we all live in the same world with the same set of axioms. For example, no person can appear in two physical places at the same time. Or, if A causes B, A has to occur before B occurs. This knowledge can't be learned through statistical methods with no symbolic priors. We postulate the logic first, then verify it through observation.

Finally, the problems that statistical learning handles well so far are essentially interpolations of the collected data. Whether it can extrapolate well is still unknown. Would inductive logic programming work better in this scenario?

Symbols are the signature of human intelligence. All of our scientific breakthroughs are encoded in symbols, even the DL (deep learning) stuff. They won't be completely replaced by the numerical paradigm any time soon.


What kind of experiments have you guys done that combine symbolic and statistical/ML methods? It sounds like an area ripe for research.


I've built Bayesian non-parametric methods that performed inference on certain formulae, FOPL subsets, or even Turing-complete programs. IMHO, it's a very exciting field that will bloom in the medium term.


This sounds fun. Do you mind giving a longer description?


I can't say much due to my contract. But I can point you to a good source to get started:

https://v1.probmods.org/learning-as-conditional-inference.ht...
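
To give a flavour of the idea in that chapter (a toy Python sketch, not anything from my own work): learning as conditional inference means sampling hypotheses from a prior and keeping only those that reproduce the observed data. Here the "program" is a coin with an unknown weight, and we condition on a handful of observed flips by rejection sampling:

    # Toy sketch: infer a coin's weight from observed flips by rejection sampling.
    import random

    def flip(p):
        return random.random() < p

    def sample_posterior(observed, n_samples=100000):
        hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]   # prior: uniform over a small grid
        accepted = []
        for _ in range(n_samples):
            w = random.choice(hypotheses)                # sample a weight from the prior
            simulated = [flip(w) for _ in observed]      # run the generative model
            if simulated == observed:                    # condition on the data
                accepted.append(w)
        return accepted

    obs = [True, True, True, False, True]                # 4 heads, 1 tail
    post = sample_posterior(obs)
    for h in [0.1, 0.3, 0.5, 0.7, 0.9]:
        print(h, post.count(h) / max(len(post), 1))      # approximate posterior over weights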


I know we use ML to "grease the wheels" of inference; i.e., Cyc gains an intuition about what kinds of paths of reasoning to follow when searching for conclusions. I don't know of any higher-level hybridization experiments; I think we only have one ML person on staff and mostly our commercial efforts focus on accentuating what we can do that ML can't, so we haven't had the chance to do many projects where we combine the two as equals.


To clarify the above:

"Cyc gains an intuition about what kinds of paths of reasoning to follow when searching for conclusions"

The possible paths come purely from symbolics. But that creates a massive tree of possibilities to explore, so ML is used simply to prioritize among those subtrees.
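
As a toy illustration (nothing like Cyc's actual machinery), the shape of that idea in Python: the candidate branches come purely from symbolic rules, and a learned scorer only decides which branch to try first.

    # Toy backward chaining: symbolic rules generate candidates, a learned
    # scorer (stand-in below) only prioritizes which subtree to explore first.
    class Rule:
        def __init__(self, head, body):
            self.head, self.body = head, body

    def score(rule, goal):
        # Stand-in for a trained model estimating how promising a branch is.
        return len(rule.body)  # e.g. prefer rules with fewer subgoals

    def prove(goal, rules, facts):
        if goal in facts:
            return True
        candidates = [r for r in rules if r.head == goal]   # symbolic step
        candidates.sort(key=lambda r: score(r, goal))        # ML-guided ordering
        for rule in candidates:
            if all(prove(sub, rules, facts) for sub in rule.body):
                return True
        return False

    rules = [Rule("mortal(socrates)", ["human(socrates)"])]
    facts = {"human(socrates)"}
    print(prove("mortal(socrates)", rules, facts))  # True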


Basically you are learning the heuristic? Do you have any public information on that? That's something I have always wanted to work on, and I really think it could be a shortcut to AGI...


Hmmm.

> I (and my company) believe in a hybrid approach

> I don't know of any higher-level hybridization experiments

That contradiction and the admission that Cyc only has "one ML person on staff" signal to me, an outsider, that the belief in parity between Machine Learning and "Symbolic" might be predicated more on faith than on reason.


I would say "more on theory than on empirical evidence". It's entirely reasonable; the way your eye "thinks" is entirely different from how your higher cognition "thinks", but you need both. If you want something more concrete, here's a recent experiment done by MIT in this realm:

https://news.mit.edu/2019/teaching-machines-to-reason-about-...

We aren't an ML shop ourselves; we don't claim to be. Given that we have around 100 people, we focus on what we have that's special instead of trying to compete in an overcrowded market. The idea of hybrid AI is something we see as a future part for us to play in the bigger picture of machine intelligence.


Wow, I should have read further ahead in the comments, before dumping my first thoughts [1] as a standalone. How do you interface between the distinct parts of your machinery? Do you use deeper level neural network representations/activities as symbol embeddings?

[1] https://news.ycombinator.com/item?id=19717680


We're beginning to run up against what I may not be allowed to talk about :)

But I will affirm that Cyc is fundamentally symbolics-based. We don't position ourselves as anti-ML, because it's seriously good at a certain subset of things, but Cyc would still be fully-functional without any ML in the picture at all.



I wish Eurisko got more love back in the day...

I'd love to experiment with an automated planner (a good old symbolic AI technique) but use deep learning to design the heuristic. It feels like a lot of reinforcement learning techniques are getting close to this kind of thing.

AlphaGo was a rough implementation of that. Do you know of some efforts from symbolic AI in that respect?

And do you still accept candidates? :-)


Doug loves to reminisce about that battling ships game; if I didn't know better I'd think he was prouder of that tournament than of Cyc itself ;)

> And do you still accept candidates? :-)

If you mean job candidates, then yes, we definitely do!


Well, does Cyc have similar success at outperforming humans?

I am employed now but I'll probably send a CV when I am looking for a change!


I don't honestly know if we've done any comparable experiments with it; we still operate somewhat like a research shop, but we've been dependent on real-world contracts since the 90s so we haven't had as much opportunity for pure research. Not for a lack of interest, of course.


I'd love to see you opposing AlphaStar in a StarCraft tournament. If you have Doug's ear, pitch it to him; he may miss playing with battlecruisers :-)


I'm not an expert on this, but here's my current understanding:

Symbolic reasoning/AI is fantastic when you have the right concepts/words to describe a domain. Often, the hard ("intelligent") work of understanding a domain and distilling its concepts needs to be done by humans. Once this is done, it should in principle be feasible to load this "DSL" into a symbolic reasoning system, to automate the process of deduction.
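
As a toy sketch of that "automate the deduction" step (assuming humans have already distilled the domain into facts and if-then rules, which are entirely made up here):

    # Minimal forward-chaining deduction over hand-crafted facts and rules.
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}
    rules = [
        # if X is a parent of Y and Y is a parent of Z, then X is a grandparent of Z
        lambda kb: {("grandparent", x, z)
                    for (p1, x, y1) in kb if p1 == "parent"
                    for (p2, y2, z) in kb if p2 == "parent" and y1 == y2},
    ]

    def forward_chain(kb, rules):
        changed = True
        while changed:                      # iterate until nothing new is derivable
            changed = False
            for rule in rules:
                new = rule(kb) - kb
                if new:
                    kb |= new
                    changed = True
        return kb

    print(forward_chain(set(facts), rules))
    # includes ("grandparent", "alice", "carol")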

The challenge is, what happens when you don't have an appropriate distillation of a complex situation? In the late eighties and early nineties, Rodney Brooks and others [1] wrote a series of papers [2] pointing out how symbols (and the definiteness they entail) struggle with modeling the real world. There are some claimed relations to Heideggerian philosophy, but I don't grok that yet. The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain). The "behavior driven" approach to robotics stems from that cauldron.

[1]: Authors I'm aware of include Philip Agre, David Chapman, Pattie Maes, and Lucy Suchman.

[2]: For a sampling, see the following papers and related references: "Intelligence without reason", "Intelligence without representation", "Elephants don't play chess".


> The essential claim is that intelligence needs to be situated (in the particular domain) rather than symbolic (in an abstract domain).

I think there is something (a lot) to this. Consider how much of our learning is experiential, and would be hard to put into a purely abstract symbol manipulating system. Take "falling down", for example. We (past a certain age) know what it means to "fall", because we have fallen. We understand the idea of slipping, losing your balance, stumbling, and falling due to the pull of gravity. We know it hurts (at least potentially), we know that skinned elbows, knees, palms, etc. are a likely consequence, etc. And that experiential learning informs our use of the term "fall" in metaphors and analogies we use in other domains ("the market fell 200 points today, on news from China...") and so on.

This is one reason I like to make a distinction between "human level" intelligence and "human like" intelligence. Human level intelligence is, to my way of thinking, easier to achieve, and has arguably already been achieved depending on how you define intelligence. But human like intelligence, that features that understanding of the natural world, some of what we call "common sense", etc., seems like it would be very hard to achieve without an intelligence that experiences the world like we do.

Anyway, I'm probably way off on a tangent here, since I'm really talking about embodiment, which is related to, but not exactly the same as, situated-ness. But that quote reminded me of this line of thinking for whatever reason.


I'm not into AI, but from what I hear of it, I've been perceiving for a while that there's quite a gap between AI and human intelligence, which is embodied cognition. It appears to me that human reasoning concepts are largely sized and paced by the physical and biological world, while this information is not accessible to a purely computational AI.

E.g. the human sizing of time is highly linked to physiological timing, if only heartbeat pace. More generally, all emotional input can steer reasoning (emotional intelligence).

Only my 2c on this. Not sure how accurate it is.


I've always agreed with this.

The amount of confusion I see between two people from different cultures speaking the same language is amazing. 70% of communication is body language. The rest appears to be shared assumptions about what the other person just said.

I don't think we'll ever be able to have a conversation with a dolphin. We know they talk to each other, we know they're able to interact with us, but how would we ever communicate with them? Their world is so different from ours. To use the example above, a dolphin cannot "fall down", so any language concepts that we have around "falling" will be impossible for them to grok. Likewise we won't have mental concepts around sonar that they use every day, and so won't understand what they mean when they refer to that. We may be able to get to "hello, my name is Alice", but beyond that... nope.

Same with "conversational" AI - it's going to need to understand what its like to have a body, so it can understand all the language around bodies. Simulating that, and being able to make references to "falling over" as a brain-in-a-box, is going to lead to misunderstanding, for exactly the reasons you describe.

I hadn't thought about the measurements aspect, but it's true. There's been some research into trees communicating - could be a classic example. They talk (via fungal networks in their roots, apparently), but so slowly that we can't hear them.

And yes, human emotion is linked to human physiology, and hormones. A lot of human communication is about recognising and empathising with human emotion. That's going to be a tough thing for a machine to do...


I think the opposite is true. Humans think in terms of symbols to model the world around them. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings. He discovers he needs food, he needs to be protected and cared for. He discovers he doesn't like pain. If you talk to a 3 year old child you can have a fairly intelligent conversation about his parents, about his sense of security, because this child has built a mental model of the world as a result of being trained by his parents. This kind of training requires context and cross-referencing of information which can only be done by inferencing. You can't train a child by flashing 10,000 pictures at him because pictures are not experience; even adults can be fooled by pictures, which are only 2D representations of 3D space. So all these experiences that a small child has of knowing about the world come to him symbolically; these symbols model the world and give even a small child the ability to reason about external things and classify them. This is human level intelligence.

Human like intelligence is training a computer to recognize pixel patterns in images so it can make rules and inferences about what these images mean. This is human like intelligence as the resulting program can accomplish human like tasks of recognition without the need for context on what these images might mean. But there is no context involved about any kind of world, this is pure statistical training.


> Humans think in terms of symbols to model the world around him. A child is born knowing nothing, a completely blank slate, and slowly he learns about his surroundings.

Actually, the research has found that new born infants can perceive all sorts of things, like human faces and emotional communication. There is also a lot of inborn knowledge about social interactions and causality. The embodied cognition idea is looking at how we experience all that.

By the way, Kant demonstrated a couple of centuries ago that the blank slate idea was unworkable.


>Actually, the research has found that new born infants can perceive all sorts of things, like human faces and emotional communication.

Yes, that's called sensory input.... a child deprived of sensory input when newborn can die because there is nothing there to show the baby of its existence; this is the cause of crib death (notice that crib death is not called arm death because a baby doesn't die in its mother's arms).

>There is also a lot of inborn knowledge about social interactions and causality.

No, babies are not born with any knowledge at all, even of the existence of society or other beings. Causality is learned from the result of human experience; causality is not known at birth.


There's no reason to think the human brain learns things using purely statistical methods and then turn around and try to argue that evolution cannot encode the same information into the structure of a baby using those exact same methods. Humans have lots of instinctual knowledge: geometry, facial recognition, kinesthetics, emotional processing, and an affinity for symbolic language and culture, just to name a few. What we don't have is knowledge of the specific details needed for socialization and survival.


Hume successfully argued that it's impossible to get causality from experience, because causes aren't in experience, only correlations or one event following another. You need a mental concept of causality to draw those connections. Hume called it a habit. Kant argued that it had to be one of the mental categories all humans structure the world of experience with. Space and time are two others. You don't get those concepts from raw sensory data. We have to be born with that capability.


> This is human like intelligence as the resulting program can accomplish human like tasks of recognition without the need for context on what these images might mean.

That's a very limited subset of what I mean by "human like intelligence". And within that specific subset, yes, AI/ML can and has achieved "human level" results. But the same ML model that can recognize cats in vectors of pixels doesn't know anything about falling down. It's never tripped, stumbled, fallen, skinned its palms, and felt the pain and seen the blood that results. It's never known the embarrassment of hearing the other AI kids laughing at it for falling, or the shame of having its AI parent shake its head and look away after it fell down. It's never been in love with the pretty girl AI (or pretty boy AI) and had to wonder "did he/she see me fall and bust my ass?"

Now, giving a computer program some part of the experience of falling is something we could do. We could load the AI into a shell of some sort, and pack it with sensors: GPS receiver, accelerometers, ultrasonic distance detectors, cameras, vibration sensors, microphones, barometric pressure sensor, temperature detector, etc., and then shove it off a shelf. Now it would "know" something about what falling actually is. And that's what I mean by the need for experiential learning in a situated / embodied setting.

While it might be possible in principle to get that knowledge into a program in some other way, my suspicion is that it would be prohibitively difficult to the point of being effectively impossible.


You've obviously never had children.


[citation needed]


My 2c:

I think the key term here is "concept formation" as well as "knowledge representation". How do we form concepts, and how are they represented internally to make them tractable?

Symbols are one way to represent concepts (or rather, point to them). But with symbols we are limited to surface-level transformations according to a syntax (I'm pretty sure Chomsky said something similar?). What do the concepts actually point to, though, and can we represent that underlying structure programmatically?

As I wrote in another comment, I'm very inspired by the conceptual spaces model:

https://mitpress.mit.edu/contributors/peter-gardenfors

https://www.youtube.com/watch?v=Y3_zlm9DrYk

Could someone please steelman my thinking here a bit? Would love to advance my own thinking on this matter.


I used to think ML was missing the ability to formulate abstractions until I read about autoencoders and GANs. If you have not, I suggest looking into them.

In a well designed autoencoder, the network ends up discovering an abstract representation of inputs and a conceptual space to express it.
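
A minimal sketch of that (assuming PyTorch; the dimensions and data are placeholders): the bottleneck z is the learned conceptual space.

    # Minimal autoencoder sketch: the bottleneck z is a learned abstract
    # representation of the inputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AutoEncoder(nn.Module):
        def __init__(self, n_in=784, n_latent=16):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                                         nn.Linear(128, n_latent))
            self.decoder = nn.Sequential(nn.Linear(n_latent, 128), nn.ReLU(),
                                         nn.Linear(128, n_in))

        def forward(self, x):
            z = self.encoder(x)          # compressed "concept" of the input
            return self.decoder(z), z

    model = AutoEncoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)              # stand-in batch of flattened images
    for _ in range(10):
        recon, z = model(x)
        loss = F.mse_loss(recon, x)      # reconstruction objective
        opt.zero_grad()
        loss.backward()
        opt.step()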


I posted a comment about the Heidegger philosophy side of things a few days ago. Winograd’s book Understanding Computers and Cognition is my reference - explains the connection really well. He gives the example of hammering to argue that common sense human intelligence is based on situatedness (related to heidegger’s “being-in-the-world”) as opposed to manipulating symbolic representations. While engaged in hammering, you don’t have a mental model of a hammer top of mind.

Original comment:

https://news.ycombinator.com/item?id=19703867


An original source for the Heideggerian critique of symbolic AI projects is Hubert Dreyfus, a philosophy professor at MIT who specialized in Heidegger and argued that his colleagues in the CS department were codifying just the kind of naive views on cognition that Heidegger spent his life criticizing.

See his books “What Computers Can’t Do”, “Being-in-the-world”, and the paper “Why Heideggerian AI Failed and Why Fixing It Would Require Making it More Heideggerian” (something like that).

A basic point is that ordinary human coping does not involve conceptual thinking, schematic rules, or the manipulation of symbols. It’s sort of like “Thinking Fast and Slow”.

We do not fundamentally live by constantly consulting our inner symbolic representation of the world, though we do that too. The more fundamental way of being is to just cope and care directly without explicit cognitive representation.

So I could attempt to codify an “expert system” for my way of coping with and caring for my cat, let’s say. But it would only be a kind of symbolic ghost of my real way of being, and it would never be sufficient. The more precise I tried to make it, the more complex it would become, until it became a gigantic mess, because it’s fundamentally an inaccurate model.

Dreyfus “Being-in-the-world” brings up many examples of the way the intelligence of daily life is informal, unconscious, and nonsymbolic. The way we maintain distance from other bodies which is only roughly approximated by the idea of a “personal space”, or the ways in which we live out masculinity and femininity.

“There are no beliefs to get clear about; there are only skills and practices. These practices do not arise from beliefs, rules, or principles, and so there is nothing to make explicit or spell out. We can only give an interpretation of the interpretation already in the practices.”

“Being and Time seeks to show that much of everyday activity, of the human way of being, can be described without recourse to deliberate, self-referential consciousness, and to show how such everyday activity can disclose the world and discover things in it without containing any explicit or implicit experience of the separation of the mental from the world of bodies and things.”

“The traditional approach to skills as theories has gained attention with the supposed success of expert systems. If expert systems based on rules elicited from experts were, indeed, successful in converting knowing-how into knowing-that, it would be a strong vindication of the philosophical tradition and a severe blow to Heidegger's contention that there is no evidence for the traditional claim that skills can be reconstructed in terms of knowledge. Happily for Heidegger, it turns out that no expert system can do as well as the experts whose supposed rules it is running with great speed and accuracy. Thus the work on expert systems supports Heidegger's claim that the facts and rules ‘discovered’ in the detached attitude do not capture the skills manifest in circumspective coping.”


> Dreyfus “Being-in-the-world” brings up many examples of the way the intelligence of daily life is informal, unconscious, and nonsymbolic.

And yet he conveys these examples using somewhat formal, conscious and symbolic method - printed words.


The question then is whether the kind of work a philosopher supposedly does—formal, conscious, symbolic—is especially fundamental to intelligence. Like, is the mind in its basic function similar to an analytic philosopher or logician? In order to make artificial intelligence, should we try to develop a simulation of a logician?

But not even philosophers actually work in the schematic way of an AI based on formal logic...


I think the dichotomy between "formal, conscious, symbolic" and "informal, unconscious, non-symbolic" may be false. We will find out in a few hundred years when AI matures. Of course I don't think we will have an AGI based on first order logic a la 1960s efforts. On the other hand, deep neural networks are not that far from "informal, unconscious, non-symbolic", but are still based on formal and symbolic foundations.


Well, every dichotomy is false, probably even the dichotomy between dichotomies and non-dichotomies...

Dreyfus’s critique is about the first order (or whatever) logic programs, and I don’t think neural nets are cognitivistic in the same way, but there’s also the point that until they live in the human world as persons they will never have “human-like intelligence”.

I think it’s interesting to think of AI in a kind of post-Heideggerian way that includes the possibility that it can be desirable or necessary for us human beings to submit and “lower” ourselves to robotic or “artificial” systems, reducing the need for the AIs to actually attain humanistic ways of being. If the self-driving cars are confused by human behaviors, we can forbid humans on the roads, let’s say. Or humans might find it somehow nice to let themselves act within a robotic system, like maybe the authentic Heideggerian being in the world is also a source of anxiety (anxiety was a big theme for Heidegger after all).


Hybrid approaches have been getting some interesting results lately[0], and will probably continue to do so, but statistical and symbolic AI are so different that these are essentially cross-disciplinary collaborations (and each hybrid system I've seen is essentially a one-off that occupies a unique local maximum).

I suspect that eventually there will be an "ImageNet Moment" of sorts starring a statistical/symbolic hybrid system and we'll see an explosion of interest in a family of architectures (but it hasn't happened yet).

[0] http://news.mit.edu/2019/teaching-machines-to-reason-about-w...


Well, symbolic AI people also work on probabilistic reasoning. The production-level example is ProbLog[1][2], used in genetics. There is even DeepProbLog[3], adding deep learning into the mix. The only problem is that both are implemented in Python; I hope there will be alternatives in native languages. Scryer Prolog[4] might become that implementation one day (it is written in Rust). Another approach is to extend vanilla Prolog, as cplint[5] does. (A minimal usage sketch follows the links below.)

[1] https://dtai.cs.kuleuven.be/problog/

[2] https://bitbucket.org/problog/problog

[3] https://bitbucket.org/problog/deepproblog

[4] https://github.com/mthom/scryer-prolog

[5] https://github.com/friguzzi/cplint
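
A minimal example of calling ProbLog from Python, roughly following its documented quickstart (treat the exact API names as an assumption and check the docs at [1]):

    # Sketch: two fair coins, probability that both come up heads.
    from problog.program import PrologString
    from problog import get_evaluatable

    model = PrologString("""
    0.5::heads(C) :- coin(C).
    coin(c1). coin(c2).
    two_heads :- heads(c1), heads(c2).
    query(two_heads).
    """)

    # Compiles the program and evaluates the query probabilities.
    print(get_evaluatable().create_from(model).evaluate())
    # roughly: {two_heads: 0.25}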


Symbolic AI and ML/DL AI are two entirely different technologies with different capabilities and applications, that both happen to be called "AI" for mostly cultural reasons. The success of one is probably unrelated to the success or failure of another. In most ways, symbolic AI has "faded" simply in that we now take most of its capabilities for granted; e.g., it never strikes you as odd that Google Maps can use your phone CPU to instantly plot a course for a cross-country roadtrip if you so desire, but that sort of thing was a major research project way back when.

In contrast, ML/DL AI is still shiny and new and we have a much less clear grasp of what its ultimate capabilities are, which makes it a ripe target for research.


Very much agree. To expand on this, check out Stuart Russell and Peter Norvig's intro book on AI. It supports your comment, as there's an entire section (chapter?) on path planning and the like.


I expect hybrid deep learning and symbolic AI systems to be highly relevant. My background, which is what I base the following opinions on: I spent the 1980s mostly doing symbolic AI, except for 2 years of neural networks (wrote the first version of the SAIC Ansim neural network library, supplied the code for a bomb detector we did for the FAA, and spent a year on a DARPA neural net advisory panel). For the last 6 years, I've been just about 100% all-in working with deep learning.

My strong hunch is that deep learning results will continue to be very impressive and that with improved tooling basic applications of deep learning will become fairly much automated, so the millions of people training to be deep learning practitioners may have short careers; there will always be room for the top researchers but I expect model architecture search, even faster hardware, and AIs to build models (AdaNet, etc.) will replace what is now a lot of manual effort.

For hybrid systems, I have implemented enough code in Racket Scheme to run pre-trained Keras dense models (code in my public github repos) and for a new side project I am using BERT, etc. pre-trained models wrapped with a REST interface, and my application code in Common Lisp has wrappers to make the REST calls so I am treating each deep learning model as a callable function.
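
For anyone curious what that "model as a callable function" pattern looks like, here is a sketch in Python rather than Common Lisp; the endpoint URL and payload shape are hypothetical stand-ins for whatever your model server exposes.

    # Sketch: a deep learning model served over REST, wrapped as a plain function.
    import requests

    def embed(sentence, endpoint="http://localhost:8500/embed"):
        # The model runs behind a REST service; callers just see a function.
        resp = requests.post(endpoint, json={"text": sentence}, timeout=5)
        resp.raise_for_status()
        return resp.json()["vector"]

    # Symbolic or application code can now call embed(...) like any other function.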


> millions of people training to be deep learning practitioners may have short careers

Have database admins disappeared? How about front end devs?


I don't know if database admins have disappeared. But we have never needed one to take care of our DynamoDB tables and our "serverless" Aurora databases.

Even though I'm pretty sure AWS needs a lot of them; although not one (or more) for each and every single one of their customers.


I'm sure your queries are horrible :p DBAs are generally not just scoped to keeping the RDBMS alive.


Haha ^^ My SQL skills are indeed quite limited. But most of my stuff relies on DynamoDB, mostly in a strictly key-value fashion.

Still, queries are a lot less to learn and much easier to fix when I get them wrong than queries plus taking care of (maintaining, scaling, fixing, backing up, ...) the database itself.


What do you think about conceptual AI? AI models that are able to create, test, modify ideas/concepts on the fly. We need a breakthrough...


(not GP, just an interested passer-by)

Based on my limited contact with AI during the aughts' semantic web/description logic symbolic heyday (we were exploring ways in which multiple communicating knowledge bases might resolve conflicting information): symbolics with uncertainty is too hard, maybe in a very far future. When ML and symbolics are successfully put together, I expect the symbolics to focus on what they do best: ignore uncertainty and change, leave all that to the ML part. For example, when you do the "obvious thing" and run symbolic reasoning on top of classifications supplied by ML (which is maybe a naive approach not working out at all, I have no idea), you would feed back corrective training updates into the classification layer instead of softening the concepts when the outcome of reasoning is not satisfactory.

Imaginary toy example: if your rules state that cars always stop at stop signs, but the observed reality is that this hardly ever happens, this first iteration of ML-fed symbolics would not adapt by adjusting the rules to "cars carefully approach stop signs, but don't actually stop"; it would eventually adapt by classifying the red octagonal shape as a yield sign, keeping the rules for stop signs as is (but never seeing any).
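
A toy sketch of that feedback loop (purely illustrative): fixed symbolic rules over ML-supplied labels, with mismatches fed back as corrections for the classifier rather than as edits to the rules.

    # Rules stay fixed; disagreements with observation become classifier corrections.
    RULES = {"stop_sign": "cars stop", "yield_sign": "cars slow down"}

    corrections = []   # would be fed back into the classifier's training set

    def reason(detected_label, observed_behaviour):
        expected = RULES[detected_label]
        if expected != observed_behaviour:
            # Don't soften the rule; assume the classification was wrong instead.
            corrections.append((detected_label, observed_behaviour))
            return "reclassify"
        return "consistent"

    print(reason("stop_sign", "cars slow down"))   # -> "reclassify"
    print(corrections)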


A lot of what's been going on in the PL community would have been called "symbolic AI" in the 80's. Program synthesis, symbolic execution, test generation, many forms of verification --- all involving some kind of SAT or constraint-solving.


Some of these are still alive in AI as well, although often under parallel names. For example, genetic programming (an old field of AI) has broadened to be more method-agnostic and doesn't really care if your automated programming method is "genetic" anymore or not, so has converged to some extent with program synthesis. As a result you can publish this kind of thing in both PLs and AI conferences nowadays, with a bit of translation. (The overlap of people who actually do publish in both types of venues is less than I might hope in an ideal world, but a handful of people do.)


What is the PL community?


Programming Languages


Please check out the Genesis Group at MIT's CSAIL. Or Patrick Winston's Strong Story Hypothesis. Or Bob Berwick. Many at MIT are still working through the 80's winter, without the confirmation bias of Minsky and Papert's Perceptrons with all the computation power and none of the theory (now called neural nets). Or any of the papers here: https://courses.csail.mit.edu/6.803/schedule.html

Or the work of Paul Werbos, the inventor of backpropagation, which was heavily influenced by -- though itself perhaps outside the canon of -- strictly symbolic approaches.


Let's see.

Databases. (Isn't Terry Winograd's SHRDLU conversation the kind of conversation that you have with the SQL monitor?) Compilers. (E.g. programming languages use theories developed to understand human languages.) Business rules engines. SAT/SMT solvers. Theorem proving.

There is sorta this unfair thing that once something becomes possible and practical it isn't called A.I. anymore.


Well when we apply materials developed for space flight to kitchen equipment, it stops being called space tech.

Space tech is when you try to go to space. AI is when you aim at AGI.


The big win in symbolic AI has been in theorem proving. In spaces which do have a formal structure underneath, that works well. In the real world, not so much.


>> There are still companies I know of that do symbolic AI (such as https://www.cyc.com), but I very rarely hear of new research in the field.

I can't answer your main question but, as a practical matter, if you don't hear of new research in the field it probably means you're not tuned in to the right channels, which is to say, the proceedings of the main AI conferences that cover a broad range of subjects: AAAI, IJCAI, IROS, ICRA, plus the more specialised ones, like AAMAS, ICAPS, UAI, KR, JELIA, etc. Any interesting research in symbolic AI is going to be there.

If you get your news from the tech press and the internet, you won't hear of any of that stuff and won't even know it's going on, because, let's face it- the large tech companies are championing a very specific kind of AI (statistical machine learning with deep neural nets) and, well, they have the airwaves, they have the hype engines and their noise is drowning out all other information on the same channels.

For the record, work on symbolic AI is still going on. For instance, the subject area of my PhD research is Inductive Logic Programming, a branch of symbolic, logic-based machine learning. This is not just active, but going strong, with a recent explosion of research in learning Answer Set Programs and the work of my group on a new ILP technique, Meta-Interpretive Learning. If we're inventing new stuff, we're still alive and well.


I think the biggest problem with symbolic AI systems is that you can only program them with "facts" that have bubbled up into consciousness. Most of human behavior is unconscious. Statistical AI tends to observe what people do instead, which is a better representation of how they will actually behave in the real world. Statistical AI trained on self-reported data (i.e. asking people what they think instead of observing what they do) has many of the same problems as symbolic AI.

This is also the biggest weakness of statistical AI, and why so many people are mad at companies that employ it. If you train on what people do, you also capture all the behavior that people wish they didn't do. Thus you get all the racism, sexual fetishes, discord, unpopular views, irrational views, tribalism, and general stupidity that folks would prefer to pretend doesn't exist, but shows up all the time to an objective observer of humanity.


Neural networks have been around for a much longer time than they have been popular/practical/commercially-viable. It just so happens that they can be accelerated using dedicated floating point computing hardware -- something GPUs are very good at.

I often think about symbolic AI, and how it relates to Boolean satisfiability. This is an integer problem. We don't seem to have a similar technology to GPUs that would be transferable to the problem domain. If we had that, maybe things would be different. I looked into this a bit, and Microsoft seems to have put some resources into a SAT computing ASIC.

To get the same kind of progress in symbolic AI, perhaps we need massively parallel/scalable SAT solving hardware. The gaming industry gave us the initial floating point hardware; maybe the cryptocurrency industry will gift us with analogous integer hardware that could push symbolic AI further.
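
For a sense of the workload such hardware would need to parallelize, here is a tiny (deliberately naive) DPLL-style SAT search in Python; clauses are DIMACS-style lists of signed integers, where 1 means x1 and -1 means NOT x1.

    # Naive DPLL sketch: prune satisfied clauses, detect conflicts, branch.
    def dpll(clauses, assignment=()):
        # Drop clauses already satisfied by the partial assignment.
        clauses = [c for c in clauses if not any(lit in assignment for lit in c)]
        if not clauses:
            return assignment                        # everything satisfied
        if any(all(-lit in assignment for lit in c) for c in clauses):
            return None                              # some clause falsified
        # Branch on an unassigned literal from the first open clause.
        lit = next(l for l in clauses[0] if -l not in assignment)
        for choice in (lit, -lit):
            result = dpll(clauses, assignment + (choice,))
            if result is not None:
                return result
        return None

    # (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x3)
    print(dpll([[1, 2], [-1, 3], [-3]]))             # -> (-1, 2, -3)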


I keep waiting for symbolic AI to really get traction in the area of software verification. Either a great language/compiler that can really prove it has no bugs, or a testing approach that takes a binary and proves that it has no bugs of a certain kind. What we've been getting, though, is evidence that the space of possibilities quickly outpaces our capacity to search it, and so logical methods don't compare well to statistical approaches. I would never close the door on it though, since programming is basically thinking symbolically. It's possible that as the industry gets more concerned with privacy and security, the class of applications that need this kind of provability will drive adoption at the current state of the art.


Ever heard of the halting problem?


The halting problem only says that a program cannot decide every possible program; it doesn't say much about the programs the vast majority of programmers are likely to write day to day.


What do you guys think of the conceptual spaces approach of Peter Gärdenfors?

See eg here:

https://mitpress.mit.edu/contributors/peter-gardenfors

https://www.youtube.com/watch?v=Y3_zlm9DrYk

From reading some papers, it seems his approach is a third way beyond symbolic and connectionist.

Indeed the title of that lecture is "The Geometry of Thinking: Comparing Conceptual Spaces to Symbolic and Connectionist Representations of Information"

Would you say conceptual spaces is a third way and how does it apply to the topic discussed in this thread?


I'd say it's symbolic, but not combinatorial. Tldr: it looks a lot closer to symbolic than to connectionist, but it seems a promising new approach within symbolic methods.

What we call symbolic AI usually makes inference by exploring the space of possibilities generated by recombining the basic symbols of a (fixed) domain language.

Gärdenfors approach has a lot of this, in that it has a symbolic representation of data, a well-defined set of symbols that stand for objects in the observed domain (animals, in the example given); additionally, each symbol has a numerical value which represents how much of each property the object possesses.

This is somewhat similar to the knowledge systems of the 70s and 80s for incomplete, approximate rule-based reasoning. But those were problematic because it was very difficult to do reasoning with their numerical values. When combining facts within the database, the respective combinations of their numerical values often had nonsensical meanings. The algebras used in those systems were not a good fit.

If Gärdenfors is right and concepts can be treated as mathematical spaces with convex regions, his approach could solve some major problems of those systems that made them impractical, and maybe bring them to prominence again.
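
A toy sketch of the convex-region idea: concepts are prototype points in a space of quality dimensions, and nearest-prototype classification carves that space into convex (Voronoi) regions. The dimensions and values below are purely illustrative.

    # Concepts as prototypes in a space of quality dimensions; classify by
    # nearest prototype, which yields convex Voronoi regions.
    import math

    # quality dimensions: (size, typical speed) -- illustrative values only
    prototypes = {
        "mouse":    (0.1, 2.0),
        "dog":      (0.5, 8.0),
        "elephant": (3.0, 6.0),
    }

    def classify(point):
        return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))

    print(classify((0.4, 7.0)))   # -> "dog"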


People tend to draw a hard distinction between symbolic AI and machine learning, but actually some machine learning algorithms are based on symbols (e.g. decision trees and association rule learners build logical rules). I recommend Pedro Domingos' book, The Master Algorithm, in which he describes the "5 machine learning tribes" (one of them is referred to as "the symbolists") and advocates for a unification of different machine learning algorithms. He even proposes a particular algorithm that would fulfill these criteria: Markov logic networks. He has developed an implementation, called Alchemy (https://alchemy.cs.washington.edu/).

If by symbolic AI we mean GOFAI, expert systems, etc., I don't think there will ever be a resurgence. But if by symbolic AI we mean machine learning algorithms that are somehow based on symbolic reasoning, I do think there will be a resurgence. In particular, this resurgence will start when: a) deep learning arrives at its limit (i.e. research gets stuck) and/or b) someone finds a scalable and SOTA-ish way to integrate symbols into gradient-based algorithms.
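
As a small illustration of the symbolist point (a sketch, assuming scikit-learn): a learned decision tree is itself a set of symbolic if-then rules that can be printed and read.

    # Train a shallow tree and print it as human-readable if-then rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    clf = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)
    print(export_text(clf, feature_names=list(data.feature_names)))
    # prints nested if-then splits over the petal/sepal features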


Random forests produce the same kind of decision trees that used to be hand-crafted, but admittedly, the ones they generate look distinctly "non-human".


Neural representations are messy, this is both a strength and a weakness. It is a strength because it allows you to easily interpolate in the latent space of the representations in ways that might not be reflected by the training data or any rule-set that a human could come up with. This underlies the power of neural networks to generalize.

Symbolic representations are clean, this is both a strength and a weakness. You might have perfectly separated categories but the real world frequently presents inputs that break taxonomies.

We invented symbols like letters and numbers to reduce the complexity of the real world. Language and mathematics are lossy representations but also incredibly useful models.

Given the value that symbols and symbolic methods have for us I have little doubt that they will be an integral part of efficient AI systems in the future. You could train a neural world model on the ballistic properties of a rocket, but if it's orders of magnitude more efficient why not learn to calculate instead?


It's really hard to make predictions... especially about the future.[1] But to the extent that I have anything to say about this, I'll offer this:

1. For all the accomplishments made with Deep Learning and other "more modern" techniques (scare quotes because deep learning is ultimately rooted in ideas that date back to the 1950's), one thing they don't really do (much of) is what we would call "reasoning". I think it's an open question whether or not "reasoning" (for the sake of argument, let's say that I really mean "logical reasoning" here) can be an emergent aspect of the kinds of processes that happen in artificial neural networks. Perhaps if the network is sufficiently wide and deep? After all, it appears that the human brain is "just neurons, synapses, etc." and we manage to figure out logic. But so far our simulated neural networks are orders of magnitude smaller than a real brain.

2. To my mind, it makes sense to try and "shortcut" the development of aspects of intelligence that might emerge from a sufficiently broad/deep ANN, by "wiring in" modules that know how to do, for example, first order logic or $OTHER_THING. But we should be able to combine those modules with other techniques, like those based on Deep Learning, Reinforcement Learning, etc. to make hybrid systems that use the best of both worlds.

3. The position stated in (2) above is neither baseless speculation / crankery, nor is it universally accepted. In a recent interview with Lex Fridman, researcher Ian Goodfellow seemed to express some support for the idea of that kind of "hybrid" approach. Conversely, in an interview in Martin Ford's book Architects of Intelligence, Geoffrey Hinton seemed pretty dismissive of the idea. So even some of the leading researchers in the world today are divided on this point.

4. My take is that neither "old skool" symbolic AI (GOFAI) nor Deep Learning is sufficient to achieve "real AI" (whatever that means), at least in the short-term. I think there will be a place for a resurgence of interest in symbolic AI, in the context of hybrid systems. See what Goodfellow says in the linked interview, about how linking a "knowledge base" with a neural network could possibly yield interesting results.

5. As to whether or not "all of intelligence" including reasoning/logic could simply emerge from a sufficiently broad/deep ANN... we only just have the computing power available to train/run ANN's that are many orders of magnitude smaller than actual brains. Given that, I think looking for some kind of "shortcut" makes sense. And if we want a "brain" with the number of neurons and synapses of a human brain, that takes forever to train, we already know how to do that. We just need a man, a woman, and 9 months.

[1]: https://quoteinvestigator.com/2013/10/20/no-predict/

[2]: https://www.youtube.com/watch?v=Z6rxFNMGdn0&feature=youtu.be...

[3]: http://book.mfordfuture.com/


> if we want a "brain" with the number of neurons and synapses of a human brain, that takes forever to train, we already know how to do that. We just need a man, a woman, and 9 months.

Geoff Hinton comments on a Reddit AMA that "The brain has about 10^14 synapses and we only live for about 10^9 seconds. So we have a lot more parameters than data. This motivates the idea that we must do a lot of unsupervised learning since the perceptual input (including proprioception) is the only place we can get 10^5 dimensions of constraint per second."

That sounds to me like humans don't take "forever to train" and definitely don't learn from "big data" compared to the size of data we feed into a small machine neural network. Brains must already have a lot of shortcuts built-in.

(comment is from https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama... )


> humans don't take "forever to train"

I was just being glib about that. "Forever" is just hyperbole, but the 10-plus years it takes to go from birth to being useful for most intellectual tasks is a pretty long time in relative terms.

> Brains must already have a lot of shortcuts built-in.

Oh absolutely. My point is just that there's no reason for us to not pursue "shortcuts" - as opposed to trying to build an ANN that's big enough to essentially replicate the actual mechanics of a real brain.

To extend this overall point though... it may be that as we learn newer/better algorithms and techniques we find out that you can actually make an ANN that would, for example, learn to do logical reasoning. And it might do so without need to use anywhere near the number of neurons and synapses that a real brain uses. But until such a time as it becomes apparent that this is likely, I think it's a good idea to continue researching "hybrid" systems that hard-wire in elements like various forms of symbolic/logical reasoning and anything else that we at least sorta/kinda understand.


We are often deceived by the fact that human infants are optimised for plasticity (I know this is arguable - but it's a reasonable theory) and for their brain to get through a biped's birth canal (and subsequently grow). Look at lambs in contrast (I've been on a sheep farm in Scotland for a couple of weeks so I've had the opportunity!). Lambs stand up about 3 to 10 minutes after birth (or there is a problem). They walk virtually immediately after that, they find the sheep's udder and take autonomous action to suckle within an hour (normally), and follow their mothers across a field, a stream, up a hill, over bridges as soon as they can walk. Within a week they are building social relations with other sheep and lambs, and within three weeks they are charging round fields playing games that appear pretty complex in terms of different defined places to run up to and back and so on.

This kind of rapid cognitive development argues strongly (IMO) against the kind of experimental/experiential training that a tabula-rasa nn approach would indicate.

Human plasticity and logical reasoning are the apex of other processes and approaches; I think we focus on them because we have so much access (personally through introspection and socially via children) to models of these processes, and because the results are so spectacular and intrinsically impressive.

I used to go to the SAB conferences in the 90's; they're still going, but somewhat diminished I think. This was where the "Sussex School" of AI had its largest expression - Phil Husbands, Maggie Boden and John Maynard Smith all spoke about the bridges between animal cognition and self organising systems. I am pretty sure that they were all barking up the wrong tree (he he he) but there was and is a lot of mileage in the approach.


AlphaGo is a hybrid system, using deep reinforcement learning and Monte Carlo tree search. Tree search dates back to Shannon, before neural networks. AlphaGo is a triumph of symbolic as well as statistical AI.


One thing I haven't seen mentioned here yet: symbolic planning is still a pretty large area. It's not really visible if you look at what people are starting AI startups around, but there are a bunch of large companies that use symbolic planning systems, and it's also an active research area, even if not the most in-vogue one at the moment.

I have no inside information on why they're interested, but it's also intriguing that DARPA continues to pour money into planning system R&D.


"fell by the wayside at the beginning of the AI winter"

I believe the various aspects of the Semantic Web are a continuation of symbolic AI. My two cents as a complete outsider on the topic.


They are, but the successful part of the Semantic Web is almost entirely limited to open-source datasets (grouped under the "Linked Data", or "Linked Open Data" initiative). That's pretty much the only part of the web that actually has an incentive to release their info in machine-readable format - everyone else would rather control the UX end-to-end and keep users dependent on their proprietary websites or apps.


But without learning, or resolving, or trying to resolve the issues that killed the 5th Generation and GOFAI in the first place.

"let's use Description Logic and F-logic because we both cannot do the science or maths to decide between them as a community (oh the irony) and hope that because they are tractable the fact that they are not expressive isn't going to matter"

5 years and £250m tax money later...

"It turns out that it matters, and there isn't an alternative, and we don't know what to do"

Meanwhile on another planet, AI researchers :

"Answersets and FOL offer a potential solution, we just have to slog away on a shoestring for 15 years to get there."


There is some interesting work using graph embeddings (like word embeddings) to add data and relations to semantic web style knowledge graphs.


Yes, this. Alternatively, what's the logical next step once the semantic web is realized? Ask yourself where wikidata is going in the long run.


A resurgence in interest might be already underway - check for example this very recent work from Joshua Tenenbaum's lab (MIT):

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding https://arxiv.org/abs/1810.02338

It actually builds on ideas that Prof. Tenenbaum has been presenting and discussing over the past few years.


I wonder about the potential of fusing subsymbolic with symbolic systems, continually learning and updating a set of feature vectors to serve as a dictionary, translating between the subsymbolic and symbolic parts of an integrated learning framework. I think of that as analogous to how the older, more intuitive parts of the brain and the language-based, reflective, linear, reasoning parts work together.


I feel that AI in the form of machine learning has the higher ground because, as an engineer, you have an attack on the problem that gets you moving instead of contemplating the what-ifs of symbolic sci-fi.

There is also the question of whether ML is simply moving us into a ditch of local optimality. I think it is.

I'd love to see symbolic AI gain a foothold, if only to have it explain back its rationalizations (which ML can't).


I have this hypothesis that, once the field of ML stabilizes around a mature industry, we'll start seeing people using symbolic tools to generate explanations of the concepts learnt by the deep learning networks.
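
One way that could look (a sketch, assuming scikit-learn; the network and dataset are stand-ins): distill the black-box model into a small decision tree trained on the network's own predictions, and read the tree off as a post-hoc symbolic surrogate explanation.

    # Surrogate-model sketch: a small tree is fit to the *black box's* outputs,
    # not the true labels, to give a readable approximation of what it learned.
    from sklearn.datasets import load_breast_cancer
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    black_box = MLPClassifier(max_iter=500).fit(data.data, data.target)

    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(data.data, black_box.predict(data.data))
    print(export_text(surrogate, feature_names=list(data.feature_names)))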


Aren't Mathematica and automated proving systems successful cases where symbolic AI happens?


I have a long background in AI (robotics, PDP, expert systems, symbolic math, vision, planning).

There appear to be two classes of knowledge. Pattern knowledge, such as riding a bicycle, is what we tend to learn in ways similar to the current machine learning trend; in some ways, this is "deductive knowledge". Explicit knowledge, on the other hand, such as learning to reason about proofs, is what we tend to learn by teaching, and it is symbolic; in some ways, this is "inductive knowledge".

The current machine learning trend leans heavily on Pattern knowledge. I don't believe it will extend into the Explicit knowledge domain. I fear that once this distinction becomes important it will be seen as a "limit of AI", leading to yet another AI winter. I tried to bring this up in the Open AI Gym (https://gym.openai.com/) but it went nowhere.

My experience leads me to hold the very unpopular opinion that AI requires a self-modifying system. Computers differ from calculators because they can modify their own behavior. I'm of the opinion that there is an even deeper kind of self-modification that is important for general AI. The physical realization of this in animals is due to the ability to grow new brain connections based on experience. One side-effect is that two identical self-modifying systems placed in different contexts will evolve differently. (A trivial example would be the notion of a "table" which is a wood structure to one system and a spreadsheet to the other system). Since they evolve different symbolic meanings they can't "copy their knowledge" but have to transfer it by "teaching".

Self-modification allows for adaptation based on internal feedback rather than external patterns (e.g. imagination). It allows a kind of hardware implementation of "genetic algorithms" (https://en.wikipedia.org/wiki/Genetic_algorithm). It allows "Explicit knowledge" to be "compiled" into "Pattern knowledge". This effect can be seen when you learn a skill like music or knitting. After being taught a manual skill you eventually "get it into your fingers", likely by self-modification, growing neural pathways.

Of all of the approaches I've seen, I think Jeff Hawkins of Numenta (https://www.amazon.com/Intelligence-Understanding-Creation-I...) is on the right track. However, he needs to extend his theories to handle self-modification in order to get past the "pattern knowledge" behavior.


>> "Pattern knowledge, such as riding a bicycle, which we tend to learn in ways similar to the current machine learning trend. In some ways, this is "deductive knowledge". "

Deduction is: given a rule and a cause, find (deduce) the effect; whereas induction is: given cause and effect, induce the rule. Isn't machine learning more inductive (given observations and outcomes, induce the decision function)?


We aren’t planning on serious research in the space but it is becoming increasingly obvious that an expert system is the right approach for our business going forward fwiw


There already has been, though it's nascent. Check the proceedings of AAAI 2019 or any of the more recent non-NIPS conferences for details.


Depends whether it could be used to improve performance on existing tasks.


Like neural nets managing symbolic systems managing neural nets, or something?



