Hi!
Please see below the third thread (3/3) of a discussion about using a multi-partite graph as a starting point for knowledge prediction.
###
The knowledge structure is similar to Google's Knowledge Graph, with some advantages:
- you get correlations for all topics, even the least popular
- you can traverse the knowledge base
- you can visualize it
Correlations are not computed from users' searches: collective knowledge is organized in a multi-partite graph.
For a demo, restricted to a mind-map of Wikipedia, see the mobile app:
http://learn.xdiscovery.com
or maps created with it at:
http://www.xdiscovery.com/en/atlas
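To make the multi-partite structure concrete, here is a minimal sketch in Python with networkx; the layers (topics, categories, documents) and all node names are hypothetical illustrations, not the actual xDiscovery data model.

```python
import networkx as nx

G = nx.Graph()

# Layer 0: topics, layer 1: categories, layer 2: source documents (hypothetical)
G.add_nodes_from(["Karl Marx", "Russia", "Google", "Robotics"], layer=0)
G.add_nodes_from(["Political philosophy", "Artificial intelligence"], layer=1)
G.add_nodes_from(["wiki:October_Revolution", "wiki:DeepMind"], layer=2)

# In a multi-partite graph, edges only connect nodes in different layers
G.add_edges_from([
    ("Karl Marx", "Political philosophy"),
    ("Russia", "Political philosophy"),
    ("Google", "Artificial intelligence"),
    ("Robotics", "Artificial intelligence"),
    ("Political philosophy", "wiki:October_Revolution"),
    ("Artificial intelligence", "wiki:DeepMind"),
])

# The layered layout is what makes the graph easy to visualize
pos = nx.multipartite_layout(G, subset_key="layer")
print(pos["Karl Marx"])  # 2D coordinates, grouped by layer
```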
I would like to use a backbone of knowledge as a seminal starting point to study how knowledge evolves and how a society makes sense of reality, exploiting semantic trees.
Which models in artificial intelligence / neuroscience / memory formation focus on semantic trees to explain why things are connected?
###
Please see below additional info to frame my question.
Instead of training machines to identify knowledge correlations from users' searches or the popularity of topics, I would like to *explain* correlations.
My proposal is to use an "analogic" approach: as a starting point, adopt a mind-map of collective knowledge reflecting an average of what people think about subjects; then iterate AI on top of it.
As an example, I can already query pathways between topics (e.g. tell me why "Karl Marx" and "Russia" are linked, or why "Google" and "Robotics" are linked).
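A pathway query of that kind can be sketched as a plain path search over the graph; here is a hedged example with networkx, where the intermediate topics and links are invented for illustration only.

```python
import networkx as nx

# Toy topic graph: these edges are invented for illustration,
# not taken from the real knowledge base.
G = nx.Graph()
G.add_edges_from([
    ("Karl Marx", "Marxism"),
    ("Marxism", "October Revolution"),
    ("October Revolution", "Russia"),
    ("Google", "DeepMind"),
    ("DeepMind", "Robotics"),
])

# "Why are Karl Marx and Russia linked?" -> the intermediate nodes are the answer
print(nx.shortest_path(G, "Karl Marx", "Russia"))
# ['Karl Marx', 'Marxism', 'October Revolution', 'Russia']

# Or enumerate several explanatory pathways up to a given length
for path in nx.all_simple_paths(G, "Google", "Robotics", cutoff=3):
    print(path)
```

Each intermediate node along a path is a candidate explanation of why the two endpoints are correlated.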
I see potential for interfacing with natural language and making bold queries like, hey, tell me about the "2008 financial crisis", and getting back an overview of the *subject*: semantic trees describing a topic, rather than web pages.
I'm currently in San Francisco and I'd love to have a deeper chat, face to face!