

Modeling creativity with a semantic network of common sense - rahulrrixe
http://www.clips.ua.ac.be/pages/modeling-creativity-with-a-semantic-network-of-common-sense

======
TheOtherHobbes
> The design choice of using red and yellow colors is a credible step towards
> a solution.

But it's not creative. It's a cliche, which is the opposite of creativity.
Collecting cliches is the easiest and laziest, but also the most effective,
way to make computers appear creative.

Labelling them "common sense" doesn't stop them being cliches.

Real creativity would be imagining a new _but convincing_ trope for rocket
design. Does this happen? It happened in Hollywood about twenty years ago when
rocket engine exhausts suddenly became cyan instead of red/orange. Cyan is
basically colour shorthand for "advanced technology" which is why the trope
has become so overused in movies, for rocket exhausts and other things.

Semiotics studies this kind of thing formally. It would be good to think
computer creativity could be more than a random-access collection of semiotic
observations, with a bit of semi-random glue logic for spice.

------
tomdesmedt
Hi. As the author of the article, here’s some background information.

This article was written in 2012 as part of my PhD dissertation
([http://arxiv.org/pdf/1410.0281](http://arxiv.org/pdf/1410.0281)), which
consists of a number of computational creativity experiments and case studies,
using the Pattern toolkit for Python
([http://www.clips.uantwerpen.be/pattern](http://www.clips.uantwerpen.be/pattern)).

The article is not exhaustive, for example it does not cite Cyc or WordNet,
although WordNet is used in another experiment in the book to generate poetry.

The limitations and simplifications of each case study in the book – of which
I’m well aware – are outlined in the conclusion of each chapter, often
touching on subjects such as “real AI” or “false impression of creativity” or
cliché.

The aim of my work was to bring together a lot of existing knowledge,
popularize it, and make it available in the form of an easy-to-use toolkit
(Pattern) for others to play with, explore and progress further. Beyond that,
computational creativity is an active and engaging domain in AI with many
open challenges, and one that warmly welcomes new researchers.

As for the unsupervised learn() function: one could write an endless loop
that crawls for “noun1 is adjective1” statements, then for
each adjective1 crawls for “noun2 is adjective1” statements, then for each
noun2 crawls for “noun2 is adjective2” statements, and so on. The problem
would be to automatically filter out uninteresting relations (there will be
many), which leads to a creativity-problem-inside-a-creativity-problem.

------
ansible
It is interesting that they are using some context for each bit of knowledge,
though it is rather too simple.

In my view, this is one of the under-appreciated areas of knowledge
engineering. The exact context underpins the truth of every fact. "The iPhone
is the best selling smartphone." is only true for certain places, and certain
times. It is definitely not true before 2007, because iPhones didn't exist for
sale yet. It may not be true in some country where it isn't even available.

Other facts like "Using marijuana is illegal." are also dependent on context.
In some places in the United States, for example, that may be true and false
simultaneously (true in the federal law context, false in the state law
context).

And that's just in the real world. We will also want general reasoning systems
to be able to operate in hypothetical, historical, or even fictional contexts.
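One minimal way to make that explicit is to key each assertion's truth value on a context. This is a toy sketch (the `Fact` class and the context tuples are invented for illustration, not from any real knowledge-engineering system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    statement: str
    truth: bool
    context: tuple  # e.g. (("jurisdiction", "US federal"),)

# The same statement can be true and false at once, in different contexts.
kb = {
    Fact("using marijuana is illegal", True,  (("jurisdiction", "US federal"),)),
    Fact("using marijuana is illegal", False, (("jurisdiction", "Colorado state"),)),
}

def truth_of(statement, context):
    """Truth value of a statement in a given context; None if unknown."""
    for fact in kb:
        if fact.statement == statement and fact.context == context:
            return fact.truth
    return None
```

A hypothetical or fictional context would just be another key, which is what makes the context dimension so open-ended.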

------
nl
At first I started reading this as though it were a paper, and was surprised
to realise that it doesn't reference Microsoft Probase[1], which is probably
the leading concept-relation knowledge base around. Nor (as noted below) Cyc,
WordNet or NELL.

Actually, it's a somewhat interesting tutorial on how to implement graphs like
these in Python.
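For anyone curious what such a graph looks like, here is a toy version in plain Python. The class and method names are my own invention, not the article's code or Pattern's API:

```python
from collections import defaultdict, deque

class SemanticNetwork:
    def __init__(self):
        # concept -> list of (relation, concept) edges
        self.edges = defaultdict(list)

    def add(self, concept1, relation, concept2):
        self.edges[concept1].append((relation, concept2))

    def neighbors(self, concept):
        return self.edges[concept]

    def path(self, start, goal):
        """Breadth-first search for a chain of concepts linking start to goal."""
        queue, seen = deque([(start, [start])]), {start}
        while queue:
            node, trail = queue.popleft()
            if node == goal:
                return trail
            for _, nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, trail + [nxt]))
        return None  # no connection found

g = SemanticNetwork()
g.add("rocket", "is-a", "vehicle")
g.add("rocket", "has-property", "fast")
g.add("cheetah", "has-property", "fast")
```

Shortest-path queries like this are the basic operation behind "exploring the connections" between concepts.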

[1] [http://research.microsoft.com/en-us/projects/probase/](http://research.microsoft.com/en-us/projects/probase/)

------
bra-ket
>"Knowledge, in the form of new concepts and relations in the semantic
network, must be supplied by human annotators [...] We can refine the learn()
function into an unsupervised, bootstrapped learning mechanism."

I'm really interested in that second option of unsupervised
concept-relation-graph learning. Are there any good pointers to prior art?

I think the problem naturally fits probabilistic graphical models but existing
PGM algorithms are way too complex to scale to real data. On the other hand
deep learning rarely goes beyond object recognition.

------
imglorp
Curiously, I did not see a nod to Doug Lenat's Cyc project. Similar idea:
encode a whole bunch of common sense semantic relations about the real world,
and then you can explore the connections. There's an open source version still
available.

[http://opencyc.org](http://opencyc.org)

[http://cyc.com](http://cyc.com)

------
johanneskanybal
You know it's time to go outside when unbounded force graphs flying all over
the place strike you as hysterically funny.

