
These answers are very personal to me. I joined Cycorp because Doug Lenat sold me on it being a more viable path toward something like AGI than I had suspected when I read about it. I left for a number of reasons (e.g. just to pursue other projects) but a big one was slowly coming to doubt that.

I could be sold on the idea that Cyc or something Cyc-like could be a piece of the puzzle for AGI.

I say "Cyc-like" because my personal opinion is that the actual Cyc system is struggling under 30-odd years of rapidly accruing technical debt and while it can do some impressive things, it doesn't represent the full potential of something that could be built using the lessons learned along the way.

But the longer I worked there the more I felt like the plan was basically:

1. Manually add more and more common-sense knowledge and extend the inference engine

2. ???

3. AGI!

When it comes to AI, the questions for me are basically always: what does the process by which it learns look like? Is it as powerful as human learning, and in what senses? How does it scale?

The target is something that can bootstrap: it can seek out new knowledge, creatively form its own theories and test them, and grow its own understanding of the world without its knowledge growth being entirely gated by human supervision and guidance.

The current popular approach to AI is statistical machine learning, which has improved by leaps and bounds in recent years. But when you look at it, it's still basically just more and more effective forms of supervised learning on very strictly defined tasks with pretty concrete metrics for success. Sure, we got computers to the point where they can play out billions of games of Chess or Go in a short period of time, and gradient descent algorithms to the point where they can converge to mastery of the tasks they're assigned much faster - in stopwatch time - than humans. But it's still gated almost entirely by human supervision - we have to define a pretty concrete task and set up a system to train the neural nets via billions of brute force examples.
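To make "hammered out one tiny change at a time" concrete, here's a minimal sketch of that kind of supervised learning by gradient descent (toy Python, a two-parameter linear model - not any real training pipeline):

    # Minimal sketch: supervised learning as repeated tiny corrections.
    # The task, the labels, and the loss are all fixed in advance by a human.
    def train(examples, lr=0.01, steps=10_000):
        w, b = 0.0, 0.0                    # model: y = w*x + b
        for _ in range(steps):
            for x, y in examples:          # human-supplied labeled data
                err = (w * x + b) - y      # how wrong we are on this example
                w -= lr * err * x          # nudge each parameter a tiny
                b -= lr * err              # amount toward lower error
        return w, b

    # The "learning" never leaves the task definition: it can only fit
    # the mapping the labels already describe.
    print(train([(1, 2), (2, 4), (3, 6)]))  # converges toward w≈2, b≈0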

The out-of-fashion symbolic approach behind Cyc takes a different strategy. It learns in two ways: ontologists manually enter knowledge in the form of symbolic assertions (or set up domain-specific processes to scrape things in), and then it expands on that knowledge by inferring whatever else it can given what it already knows. It's gated by the human hand in the manual knowledge acquisition step, and in the boundaries of what is strictly implied by its inference system.
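As a toy illustration of that second kind of growth - the system closing over whatever its rules strictly imply - here's a crude forward-chaining sketch (hypothetical predicates and a hand-rolled loop, not Cyc's actual CycL or inference engine):

    # Toy forward chaining: expand a knowledge base with everything the
    # rules strictly imply. Nothing outside that closure is ever learned.
    facts = {("isa", "Fido", "Dog")}
    rules = [  # (premise pattern, conclusion pattern)
        (("isa", "?x", "Dog"), ("isa", "?x", "Mammal")),
        (("isa", "?x", "Mammal"), ("isa", "?x", "Animal")),
    ]

    changed = True
    while changed:
        changed = False
        for (rel, _, head), (rel2, _, concl) in rules:
            for (r, subj, obj) in list(facts):
                if r == rel and obj == head and (rel2, subj, concl) not in facts:
                    facts.add((rel2, subj, concl))
                    changed = True

    # Adds ("isa", "Fido", "Mammal") and ("isa", "Fido", "Animal") - and stops.
    print(sorted(facts))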

In my opinion, both of those lack something necessary for AGI. It's very difficult to specify what exactly that is, but I can give some symptoms.

A real AGI is agentive in an important sense - it actively seeks out things of interest to it. And it creatively produces new conceptual schemes to test out against its experience.

When a human learns to play chess, they don't reason out every possible consequence of the rules in exactly the terms they were initially described in (which is basically all Cyc can do), or sit there and memorize higher-order statistical patterns in play through billions of games of trial and error (which is basically what ML approaches do). They learn the rules, reason about them a bit while playing games to predict a few moves ahead, play enough to get a sense of some of those higher-order statistical patterns, and then they do a curious thing: they start inventing new concepts that aren't in the rules. They notice the board has a "center" that it's important to control; they start thinking in terms of "tempo" and "openness" and so on. The end result is in some ways very similar to the result of higher-order statistical pattern recognition, but in the ML case those patterns were hammered out one tiny change at a time until they matched reality, whereas in the human there's a moment where they did something very creative and had an idea and went through a kind of phase transition where they started thinking about the game in different terms.

I don't know how to get to AI that does that. ML doesn't - it's close in some ways but doesn't really do those inductive leaps. Cyc doesn't either. I don't think it can in any way that isn't roughly equivalent to manually building a system that can inside of Cyc. Interestingly, some of Doug Lenat's early work was maybe more relevant to that problem than Cyc is.

Anyway that's my two cents. As for the second question, I have no idea. I didn't come up with anything while I worked there.




> But the longer I worked there the more I felt like the plan was basically:
>
> 1. Manually add more and more common-sense knowledge and extend the inference engine
>
> 2. ???
>
> 3. AGI!

That's the same impression I had in the early days of expert systems. I once made the comment, "It's not going to work, but it's worth trying to find out why it won't work." I was thinking that rule-based inference was a dead end, but maybe somebody could reuse the knowledge base with something that works better.


Thanks for the answer!

> Interestingly, some of Doug Lenat's early work was maybe more relevant to that problem than Cyc is.

Yeah, Eurisko was really impressive; I've often wondered why people don't work on that kind of stuff anymore.


The last part of your comment is kind of messy, but I agree with this part and find it interesting:

> in the ML case those patterns were hammered out one tiny change at a time until they matched reality, whereas in the human there's a moment where they did something very creative and had an idea and went through a kind of phase transition where they started thinking about the game in different terms.

Phase transition, or the "aha" moment, where things start to logically make sense. Humans have that moment. Knowledge gets crystallized in the same sense that water starts to form a crystal structure. The regularity in the structure offers the ability to extrapolate, which is what current ML is known to be poor at.


Great comments by you and others here.

I was visiting MCC during the startup phase and Bobby Inman spent a little time with me. He had just hired Doug Lenat, but Lenat was not there yet. Inman was very excited to be having Lenat on board. (Inman was on my board of directors and furnished me with much of my IR&D funding for several years.)

From an outsider's perspective, I thought that the business strategy of OpenCyc made sense, because many of us outside the company had the opportunity to experiment with it. I still have backups of the last released version, plus the RDF/OWL releases.

Personally, I think we are far from achieving AGI. We need some theoretical breakthroughs (I would bet on hybrid symbolic, deep learning, and probabilistic graph models). We have far to go, but as the Buddha said, enjoy the journey.


Thank you for this AMA; it was eye-opening and made me think a lot about the organizational and tech-debt barriers to creating AGI (or creating an organization that can create AGI).

I'm a ML researcher working on Deep Learning for robotics. I'm skeptical of the symbolic approach by which 1) ontologists manually enter symbolic assertions and 2) the system deduces further things from its existing ontology. My skepticism comes from a position of Slavic pessimism: we don't actually know how to formally define any object, much less ontological relationships between objects. If we let a machine use our garbage ontologies as axioms with which to prove further ontological relationships, the resulting ontology may be completely disjoint from the reality we live in. There must be a forcing function with which reality tells the system that its ontology is incorrect, and a mechanism for unwinding wrong ontologies.
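A crude sketch of that failure mode and the forcing function I mean (hypothetical predicates, nothing from any real ontology system):

    # One garbage axiom, deductively amplified, then caught only because
    # reality gets a vote.
    axioms = {("can_fly", "Bird")}               # too general: garbage in
    kb = {("isa", "Penguin", "Bird")}

    # Deduce: everything that isa Bird can fly.
    derived = {("can_fly", s) for (_, s, o) in kb if ("can_fly", o) in axioms}

    # Forcing function: an observation contradicts the derived ontology,
    # so we unwind both the conclusion and the axiom that produced it.
    observations = {("cannot_fly", "Penguin")}
    for (pred, subj) in list(derived):
        if pred == "can_fly" and ("cannot_fly", subj) in observations:
            derived.discard((pred, subj))
            axioms.discard(("can_fly", "Bird"))

    print(axioms, derived)  # both emptied of the wrong claim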

I'm reminded of a quote from the movie Alien: Covenant.

Walter: "When one note is off, it eventually destroys the whole symphony, David."


I am currently trying to build an AGI in my free time.

> it doesn't represent the full potential of something that could be built using the lessons learned along the way.

What are those lessons? I would like to benefit from them instead of reproducing your past mistakes.



