That said, I remember reading about Cog when I was a freshman (around 1994), and to this day it doesn't seem to have delivered on the promise, despite many millions of DoD dollars. The idea was that a reasonably featured AI would emerge once the Cog ontology reached critical mass, but that just doesn't seem to be happening.
EDIT: disregard this, a case of my confusion among three-letter names.
AGI has the same "problem". I like AGI, but a truly practical implementation seems a long way off.
I probably should qualify the above by saying that lower-level and domain-specific ontologies might be manageable. That is where a lot of the work has been done. The problem is moving up the hierarchy of abstraction to where the ontology meets the encoding of common-sense items.
Cyc has tried for years to encode common-sense rules and terms in the hope of approximating human common sense. But the problem I see with that approach is that humans (those entering the rules into the system, for example) don't themselves know how common sense is encoded. They just know it; it is hard to formalize that knowledge, and it is very much personalized.
- An API to manipulate an extended hypergraph of terms and relationships, dubbed the "AtomSpace".
- An implementation of a probabilistic reasoning engine based on Probabilistic Logic Networks (PLN).
- A probabilistic genetic program evolver called Meta-Optimizing Semantic Evolutionary Search, or MOSES, originally developed by Moshe Looks.
- An attention allocation system based on economic theory.
- An embodiment system for interaction and learning within virtual worlds.
- A natural language input system consisting of Link Grammar and RelEx, both of which employ AtomSpace-like representations for semantic and syntactic relations.
- A natural language generation system called SegSim, with implementations NLGen and NLGen2.
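To make the first item above more concrete: the AtomSpace stores atoms that are either nodes (named concepts) or links (relations whose targets are other atoms), each tagged with a probabilistic truth value. Here is a minimal illustrative sketch of that idea in Python; the class and method names are my own invention and do not reflect the actual OpenCog API.

```python
# Toy AtomSpace-like hypergraph store (illustrative only, NOT the real
# OpenCog API): nodes are (type, name) pairs, links are (type, targets)
# pairs, and each atom carries a simple probabilistic truth value.

from dataclasses import dataclass


@dataclass(frozen=True)
class TruthValue:
    strength: float = 1.0    # estimated probability the atom holds
    confidence: float = 0.0  # how much evidence backs that estimate


class AtomSpace:
    def __init__(self):
        self._atoms = {}  # atom key -> TruthValue

    def add_node(self, type_, name, tv=TruthValue()):
        key = (type_, name)            # e.g. ("ConceptNode", "cat")
        self._atoms.setdefault(key, tv)
        return key

    def add_link(self, type_, outgoing, tv=TruthValue()):
        key = (type_, tuple(outgoing))  # link over other atoms
        self._atoms.setdefault(key, tv)
        return key

    def incoming(self, atom):
        """All links whose outgoing set contains `atom`."""
        return [k for k in self._atoms
                if isinstance(k[1], tuple) and atom in k[1]]


# Usage: encode "cats are animals" and query which links mention 'cat'.
space = AtomSpace()
cat = space.add_node("ConceptNode", "cat")
animal = space.add_node("ConceptNode", "animal")
inh = space.add_link("InheritanceLink", [cat, animal],
                     TruthValue(strength=0.95, confidence=0.9))
print(space.incoming(cat))
```

The point of the hypergraph structure is that links can themselves be targets of other links, which is how higher-order relations (the kind PLN reasons over) get represented.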
I'll keep the comment anyway, so as not to decapitate the discussion :)