Hacker News

I'm very curious to know where, and to what extent, the symbolic approaches of the past (and present) intersect with ML.

If you had a good answer to that, you'd probably be well on your way to a Ph.D., if not a Turing Award. The question of symbolic/sub-symbolic integration has been a big outstanding question in the AI world for a very long time now. For quite a while I don't think many people were actively working on it, but there seems to have been at least a small uptick in interest in the idea recently. My personal belief is that this kind of integration will be essential, at least in the short term, to achieving something like what we might actually call AGI. And while I'm hardly alone in thinking this, the position is by no means universally held. There are people (Geoff Hinton among others, if memory serves) who believe that "neural nets are completely sufficient".

And frankly, in the long (enough) term that might be right. Build ANNs that are sufficiently deep, sufficiently wide, and with just the right initial architecture, and maybe you get something that develops "the master algorithm" and figures it all out on its own. I think that's probably possible in principle; my doubt is more about how realistic it is, especially over shorter time scales.

Anyway, if you're really interested in the topic, Ben Goertzel's OpenCog system includes a strong focus on symbolic/sub-symbolic integration, and borrows a lot of ideas from some well-known cognitive architecture work (LIDA, in particular).

Also, googling "symbolic / sub-symbolic integration" will turn up a ton of sites / papers / books / etc. that go into far more detail.


One book-length treatment of this topic that I'm aware of (but not deeply familiar with) is this one, by Ron Sun:


I went very deep into OpenCog and finally had to concede that there just wasn't enough rigor and coordination between the components. Goertzel seems easily distracted by various other subjects. I realize that he has to figure out ways to fund his work, so I am not being judgmental.

In addition to symbolic and deep learning, future AI systems will most likely have a causal learning component. Judea Pearl has been working on this subject for years. http://bayes.cs.ucla.edu/jp_home.html
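To make "causal learning" concrete: a standard example from Pearl's work is backdoor adjustment, where a confounder biases the naive observational estimate of a treatment effect. Here's a toy sketch (all numbers and variable names are my own invention, not from Pearl's site) showing that adjusting over the confounder recovers the true effect:

```python
import random

random.seed(0)

# Synthetic data: confounder Z influences both treatment X and outcome Y.
# By construction, X adds exactly 0.1 to P(Y=1), regardless of Z.
N = 100_000
data = []
for _ in range(N):
    z = random.random() < 0.5                       # confounder
    x = random.random() < (0.8 if z else 0.2)       # Z raises chance of treatment
    y = random.random() < (0.6 if z else 0.2) + (0.1 if x else 0.0)
    data.append((z, x, y))

def p_y(rows):
    """Fraction of rows with Y=1."""
    return sum(y for _, _, y in rows) / len(rows)

# Naive (confounded) estimate: P(Y=1|X=1) - P(Y=1|X=0).
naive = p_y([r for r in data if r[1]]) - p_y([r for r in data if not r[1]])

# Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
def do_x(xval):
    total = 0.0
    for zval in (False, True):
        stratum = [r for r in data if r[0] == zval and r[1] == xval]
        p_z = sum(1 for r in data if r[0] == zval) / len(data)
        total += p_y(stratum) * p_z
    return total

adjusted = do_x(True) - do_x(False)
print(f"naive effect:    {naive:.3f}")     # inflated by the confounder (~0.34)
print(f"adjusted effect: {adjusted:.3f}")  # close to the true 0.1
```

The naive difference comes out around 0.34 because Z drives both X and Y, while the adjusted estimate lands near the true 0.1. Learning which adjustment is valid from the causal graph is exactly what Pearl's do-calculus formalizes.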

Good points all around. I think OpenCog has a lot of good ideas, but I won't claim that it's the "be-all, end-all" as of today. That said, I think to some extent the statement "there just wasn't enough rigor and coordination between the components" may be true exactly because that is the central challenge that still remains to be solved.

At the very least, I think reading Goertzel's books[1] and looking at OpenCog is a good introduction to the issues at hand in a general sense.

Totally agree on the causal learning thing. And that's an area that also seems to have had a resurgence of interest and activity lately.

[1]: Here I specifically mean Engineering General Intelligence, Volumes 1 & 2

Thanks for the leads on this - very excited to look further into it.
