
Douglas Lenat's Cyc is now being commercialized - cedricr
https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/
======
DonHopkins
Marvin Minsky said "We need common-sense knowledge – and programs that can use
it. Common sense computing needs several ways of representing knowledge. It is
harder to make a computer housekeeper than a computer chess-player, because
the housekeeper must deal with a wider range of situations." [1]

He named Douglas Lenat as one of the ten or so people working on common sense
(at the time of the interview in 1998), and said the best system based on
common sense is CYC. But he called for the builders of proprietary systems not
to keep their data secret, and to distribute copies, so the systems can evolve
and get new ideas, and because we must understand how they work.

Sabbatini: Why are there no computers already working with common-sense
knowledge?

Minsky: There are very few people working on common sense problems in
Artificial Intelligence. I know of no more than five people, so probably there
are about ten of them out there. Who are these people? There's John McCarthy,
at Stanford University, who was the first to formalize common sense using
logic. He has a very interesting web page. Then, there is Aaron Sloman, from
the University of Birmingham, who's probably the best philosopher in the world
working on Artificial Intelligence, with the exception of Daniel Dennett, but
he knows more about computers. Then there's me, of course. Another person
working on a strong common-sense project is Douglas Lenat, who directs the CYC
project in Austin. Finally, Douglas Hofstadter, who wrote many books about the
mind, artificial intelligence, etc., is working on similar problems.

We talk only to each other and no one else is interested. There is something
wrong with computer sciences.

Sabbatini: Is there any AI software that uses the common sense approach?

Minsky: As I said, the best system based on common sense is CYC, developed by
Doug Lenat, a brilliant guy, but he set up a company, Cycorp, and is
developing it as a proprietary system. Many computer scientists have a good
idea and then make it a secret and start building proprietary systems. They
should distribute copies of their systems to graduate students, so that they
could evolve and get new ideas. We must understand how they work.

[1]
[http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.htm](http://www.cerebromente.org.br/n07/opiniao/minsky/minsky_i.htm)

~~~
AnimalMuppet
> We talk only to each other and no one else is interested.

OK.

> There is something wrong with computer sciences.

Or there is something wrong with you (Minsky). If you're brilliant, and the
rest of the world doesn't follow you, it doesn't mean that there's something
wrong with them. It may simply be that you are brilliant and wrong.

~~~
DonHopkins
Do you mean {inclusive or exclusive} "Or"? I'd say there's something wrong
with computer sciences, and Minsky was brilliant, and right about some things,
and wrong about other things.

>He [Aaron Sloman, one of the small group of "each other" who talk to each
other] disagrees with all of these on some topics, while agreeing on others.

~~~
AnimalMuppet
I meant exclusive or. I was getting at the arrogance: "Out of all the AI
people, only the 5 of us talk to each other. There must be something wrong
with the whole field, because they can't see how right we are!"

The arrogance - that "we" _clearly_ are right, so "they" _clearly must be_
wrong - grates on me. Minsky may in fact be right, but he should at least have
the humility to see that, in a difference of opinion between the few and the
many, it is at least _possible_ that the many are right...

~~~
ScottBurson
> The arrogance - that "we" clearly are right, so "they" clearly must be wrong
> - grates on me.

I don't think he meant it that way. He was well aware he didn't have all the
answers. What I believe he was talking about was not the answers but the
questions: which ones are people spending their time on? I think he's saying
that the questions that most people in AI are spending their time on are not
going to give us strong AI. Is that such a controversial claim? I expect most
people in the field would agree with it.

~~~
DonHopkins
I agree that he didn't mean it in an arrogant way, didn't think he had all the
answers, and was asking big questions. He was all about integrating multiple
methods, including commonsense knowledge like CYC. But it's hard to get
commonsense knowledge methods funded by the current "benefactors of AI".

Here is something he said to me in April 2009 in a discussion about
educational software for the OLPC:

Marvin Minsky: "I've been unsuccessful at getting support for a major project
to build the architecture proposed in "The Emotion Machine." The idea is to
make an AI that can use multiple methods and commonsense knowledge--so that
whenever it gets stuck, it can try another approach. The trouble is that most
funding has come under the control of statistical and logical practitioners,
or people who think we need to solve low-level problems before we can deal
with human-level ones."

Maybe (I'll venture a wild guess) it's just that investing in statistical AI
research currently makes more financial sense for the goals of the advertising
industry that's funding most of the research these days... You're the product,
and all that.

------
nickpsecurity
Been a while since I heard about my once-favorite project aiming to imitate
common sense. I think I even contributed to its knowledge base a bit; my
memory is hazy there. I loved that Lenat was one of the few to see (a) the
need for a common-sense representation, (b) that many people would need to
train it, and (c) the need for good algorithms to integrate it with other
things. The part I strongly criticized was locking it up in proprietary
fashion: the worst thing you can do for something needing this much training
data.

Good to see it's being commercialized... again? Swore he had a company.
Anyway, probably the most valuable thing is the knowledge base they built. It
was structured, curated, and very general. It would be great if AI researchers
working on different architectures, including adaptive NN's, re-encoded and
used that knowledge base. Might speed up training and catch blind spots w/
common sense checks.

Note to other researchers: it would be worth the effort to re-create a similar
knowledge base that's more open to the public, but with careful moderation.
Make sure the knowledge base and a decent engine are open source. Gotta be for
best results here.

~~~
joe_the_user
It seems terrible that such a project would lock up all that knowledge in a
proprietary form.

Fortunately, my scan of their website indicates they have released their
ontologies; they're under a Creative Commons license.

[http://www.cyc.com/platform/opencyc/](http://www.cyc.com/platform/opencyc/)
[http://www.cyc.com/documentation/opencyc-license/](http://www.cyc.com/documentation/opencyc-license/)

~~~
catpolice
OpenCyc is only a fraction of the ontology, unfortunately. There's a lot of
internal desire to update and expand OpenCyc, but my understanding is that at
present the company hasn't secured funding that they're really allowed to use
for that purpose.

~~~
nickpsecurity
Oh no! I take it back! We're still missing the knowledge base we need. At
least OpenCyc might be a nice start on it.

------
bgribble
It's odd to speak of Cyc just now being commercialized -- Cycorp has been in
business using Cyc as its core tech for a long time. Military contracting,
among other stuff.

~~~
aab0
Which makes one wonder what exactly 'Lucid' is doing differently from Cycorp
Inc (1995, described in the Wikipedia article for Cyc), which is exactly what
the TR article doesn't cover. /sigh

------
nikolay
Have some respect and don't call him "Doug", please! He's always been "Douglas
Lenat"! Although questioned, Eurisko [0] (circa 1976) is, to me, a bigger AI
achievement than the much-hyped AlphaGo! I have great respect for Douglas
Lenat!

[0]:
[https://en.wikipedia.org/wiki/Eurisko](https://en.wikipedia.org/wiki/Eurisko)

~~~
nikolay
Wow! Just found this [0]!

[0]:
[http://lesswrong.com/lw/10g/lets_reimplement_eurisko/](http://lesswrong.com/lw/10g/lets_reimplement_eurisko/)

~~~
nickpsecurity
The one person who had the papers said they didn't contain enough detail even
to understand how to implement the main loop; it was too vague. So: no
detailed papers, no source, good results that might have been done by hand,
steady funding for decades, and a commercial spin-off. One commenter said it
looks like a Xanatos Gambit:

[http://tvtropes.org/pmwiki/pmwiki.php/Main/XanatosGambit](http://tvtropes.org/pmwiki/pmwiki.php/Main/XanatosGambit)

I thought the Cyc project was worth a long-term investment, but the other
theory might simultaneously be true.

------
dcroley
Wow. I did not realize it was time for the yearly Cyc article again. Cycorp
has been a thing for a long time, but I think history has shown that the path
Doug and Cyc have taken is not the way forward.

------
mchahn
> Cyc has been given many thousands of facts

Are thousands enough? Maybe the article misstated this.

~~~
aidenn0
The free version of Cyc has about a quarter-million _terms_ alone, so the
article is likely wrong.

[edit]

According to this, it has "about seven million assertions", and it notes that
Cyc can infer many more assertions from those.

[http://www.cyc.com/kb/](http://www.cyc.com/kb/)
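For intuition, here is a toy sketch (hypothetical; not Cyc's actual representation or inference engine) of how a few asserted triples plus simple inheritance rules can yield additional assertions by forward chaining:

```python
# Toy forward-chaining over (subject, relation, object) triples.
# Illustrative only -- Cyc's real representation and engine are far richer.

def infer(facts):
    """Compute the closure of `facts` under two simple rules:
    1. genls is transitive: (A genls B), (B genls C) => (A genls C)
    2. isa inherits up:     (X isa A),   (A genls B) => (X isa B)
    ("genls" borrows Cyc's name for the subclass relation.)"""
    closure = set(facts)
    changed = True
    while changed:
        new = set()
        for (a, r1, b) in closure:
            for (c, r2, d) in closure:
                if b != c:
                    continue
                if r1 == "genls" and r2 == "genls":
                    new.add((a, "genls", d))
                elif r1 == "isa" and r2 == "genls":
                    new.add((a, "isa", d))
        added = new - closure
        closure |= added
        changed = bool(added)
    return closure

facts = {
    ("Socrates", "isa", "Human"),
    ("Human", "genls", "Mammal"),
    ("Mammal", "genls", "Animal"),
}
closure = infer(facts)
print(("Socrates", "isa", "Animal") in closure)  # True: never asserted, only inferred
print(len(closure) - len(facts))                 # 3 new assertions from 3 asserted ones
```

Three asserted triples yield three more by inference; at Cyc's scale, the same effect is what lets millions of hand-entered assertions support many more derivable ones.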

~~~
cpeterso
I wonder how well Cyc could build assertions from fuzzy or only
semi-trustworthy data from sources like Wikipedia.

~~~
aidenn0
My guess is: quite poorly. Remember, the assertion that all humans have two
arms and two legs isn't even 100% true, which is one of many reasons the
majority of the AI field abandoned the formal-logic approach for statistical
methods.

The other side of the story would be that the majority of the AI field didn't
want to spend 30 years formalizing the large body of general-purpose
knowledge.
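The trade-off in the two paragraphs above can be shown in a few lines: a universally quantified assertion is falsified by a single exception, while a statistical estimate just degrades to a probability below 1. This is a toy illustration using the two-arms example, not how Cyc actually handles exceptions:

```python
# Hard logical rule vs. statistical estimate, using the
# "humans have two arms" example from the comment above.

people = [
    {"name": "Alice", "arms": 2},
    {"name": "Bob",   "arms": 2},
    {"name": "Carol", "arms": 1},  # the one exception
]

# Formal-logic style: the universal assertion is simply false.
universal_holds = all(p["arms"] == 2 for p in people)

# Statistical style: the same knowledge degrades gracefully.
p_two_arms = sum(p["arms"] == 2 for p in people) / len(people)

print(universal_holds)       # False: one counterexample sinks the rule
print(round(p_two_arms, 2))  # 0.67: still useful as a default expectation
```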

------
mark_l_watson
I have played with OpenCYC.org for years. It hasn't been updated since 2012,
but version 4 is still interesting.

After seeing the utility of Google's Knowledge Graph, I wish there were a free
open source project to combine all the public data sources like OpenCYC,
DBPedia, the Freebase dumps in MediaPedia, etc.

------
tkosan
Douglas Lenat recently gave the following talk at CMU about how Cyc works and
its current capabilities:
[https://www.youtube.com/watch?v=4mv0nCS2mik](https://www.youtube.com/watch?v=4mv0nCS2mik)

------
lcall
fwiw: A related project but with a different, I hope more complete, vision for
storing ~"any/all knowledge": [http://onemodel.org](http://onemodel.org) .
(AGPL)

------
catpolice
I'll say this much: Cycorp is an interesting place to work.
