LogicENN: A Neural Based Knowledge Graphs Embedding Model with Logical Rules (arxiv.org)
79 points by sel1 27 days ago | 10 comments

Knowledge Representation: Knowledge has played a pivotal role in the development of civilizations. The 21st century has become the century of the explosion of human knowledge, thanks to the development of sophisticated communication systems. This presents a great opportunity for humanity. However, properly representing and managing such a huge amount of knowledge is a challenging task. Another challenge is to reason over that knowledge to derive new knowledge. The Knowledge Graph (KG) addresses the challenge of representing the knowledge humans have accumulated. A KG represents knowledge as a multi-relational graph in which entities are connected by different relations. This is more consistent with the essence of real-world knowledge, where things are connected to each other.

Learning over Symbolic Data: Knowledge graphs mostly contain symbolic data in which items are connected. Learning over such data is a challenging task in Machine Learning (ML), since ML models mostly operate on vectors. Knowledge Graph Embedding (KGE) addresses this challenge. The core idea is that there is a vector space equivalent to the symbolic KG space that follows its underlying structure. KGE aims to map the symbolic KG to that vector space.

Neural Based Embedding: Neural Networks (NNs) have been widely used to learn over data represented as vectors. NNs can also be used to learn over symbolic, connected data (e.g., a KG): they can learn a knowledge graph representation and provide a mapping between the symbolic KG space and the corresponding vector space. In other words, an NN embeds a symbolic KG into a vector space while preserving the underlying structure.
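As a toy illustration of this symbol-to-vector mapping (the entity/relation names and the random initialisation below are made up for the example, not taken from the paper): each symbol in the KG gets its own trainable vector, so a triple becomes a tuple of vectors a network can consume.

```python
import random

random.seed(0)
DIM = 4  # toy embedding dimension

entities = ["Bonn", "Germany", "Berlin"]
relations = ["IsACityIn", "IsCapitalOf"]

# Each symbol (entity or relation) is assigned its own vector;
# here they are randomly initialised, as they would be before training.
embed = {s: [random.uniform(-1, 1) for _ in range(DIM)]
         for s in entities + relations}

# The symbolic triple (Bonn, IsACityIn, Germany) becomes three vectors.
h, r, t = embed["Bonn"], embed["IsACityIn"], embed["Germany"]
print(len(h), len(r), len(t))  # 4 4 4
```

Training would then adjust these vectors so that the geometry of the vector space mirrors the structure of the graph.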

Reasoning by Learning: Once the neural network maps the KG to a vector space and learns the KG representation, it can perform reasoning: the NN takes a triple (e.g., (Bonn, IsAcityIn, Germany)) and determines whether it is true. This prediction results from reasoning over the KG in the vector space.
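A minimal sketch of that reasoning step, assuming a toy one-hidden-layer scorer (the architecture and random weights below are illustrative, not LogicENN's actual network): the net maps a triple's vectors to a plausibility score in (0, 1), which is thresholded to decide truth.

```python
import math
import random

random.seed(1)
DIM = 4  # toy embedding dimension

def mlp_score(h, r, t, weights):
    """One hidden tanh layer over the concatenated triple vectors,
    followed by a sigmoid, so the score lies strictly in (0, 1)."""
    x = h + r + t  # list concatenation -> 3*DIM input features
    hidden = [math.tanh(sum(w_i * x_i for w_i, x_i in zip(row, x)))
              for row in weights["W1"]]
    logit = sum(w * a for w, a in zip(weights["w2"], hidden))
    return 1.0 / (1.0 + math.exp(-logit))

# Random (untrained) weights and embeddings, purely for illustration.
weights = {
    "W1": [[random.uniform(-1, 1) for _ in range(3 * DIM)] for _ in range(8)],
    "w2": [random.uniform(-1, 1) for _ in range(8)],
}
h = [random.uniform(-1, 1) for _ in range(DIM)]
r = [random.uniform(-1, 1) for _ in range(DIM)]
t = [random.uniform(-1, 1) for _ in range(DIM)]

score = mlp_score(h, r, t, weights)
is_true = score > 0.5  # threshold the plausibility score
print(0.0 < score < 1.0)  # True
```

After training on observed triples, the same forward pass answers queries like "is (Bonn, IsACityIn, Germany) true?" for triples never seen in the graph.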

From Logic to Algebra: After embedding a KG in its corresponding vector space, each symbolic element (entity or relation) is assigned a vector. Moving from symbols to vectors is equivalent to moving from logic to algebra. In other words, after mapping a KG to a vector space, logical rules can be converted to their corresponding algebraic formulas. More concretely, given the target vector space for the KG, one can derive a formula for each logical rule. Therefore, just as there is an equivalent vector for each symbol, there is an equivalent algebraic formula for each logical rule.
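As one concrete (and deliberately simplified) example of such a translation — an illustrative scheme, not the paper's exact derivation: an implication rule like IsCapitalOf(x, y) => IsACityIn(x, y) can become an inequality over triple scores, "the head must score at least as high as the body", with a hinge penalty measuring any violation.

```python
def implication_penalty(score_body, score_head):
    """Hinge penalty for the rule body(x, y) => head(x, y):
    zero when score_head >= score_body (rule satisfied),
    positive by the margin of violation otherwise."""
    return max(0.0, score_body - score_head)

# If IsCapitalOf(Bonn, Germany) scores 0.9 but IsACityIn(Bonn, Germany)
# only scores 0.6, the algebraic form of the rule is violated by 0.3.
print(round(implication_penalty(0.9, 0.6), 2))  # 0.3
print(implication_penalty(0.2, 0.8))            # 0.0 (rule satisfied)
```

The logical statement is thus replaced by an ordinary algebraic expression over real numbers, which is exactly what a gradient-based learner can work with.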

Capability vs practical rule injection:

The question is whether NNs are capable of encoding logical rules. The paper proposes a neural network and proves that it is capable of expressing any ground truth of the encoded rules in the KG. The paper focuses on one class of NNs and one class of logical rules. Investigating the capability of different classes of NNs to encode different classes of logical rules is an important problem that could open a new window for ML.

Once one is sure that the KGE model has enough capacity to encode a class of logical rules, deriving a formula for each logical rule is the essential next step. The derived formulas can then be used to guide the learning process.
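A sketch of how a derived rule formula could guide learning, assuming a simple penalty-based scheme (the function names and the weighting constant are hypothetical, not LogicENN's actual objective): the algebraic rule penalties are added to the ordinary data-fitting loss, so violating a rule costs the model during training.

```python
def total_loss(data_loss, rule_penalties, lam=0.1):
    """Rule-injected objective: fit the observed triples (data_loss)
    while softly enforcing the algebraic form of each logical rule.
    lam trades off data fit against rule satisfaction."""
    return data_loss + lam * sum(rule_penalties)

# One rule violated by 0.3, one satisfied: the violated rule adds
# lam * 0.3 to the loss, pushing gradients toward satisfying it.
loss = total_loss(0.5, [0.3, 0.0], lam=0.1)
print(round(loss, 2))  # 0.53
```

Minimising this objective with any gradient method trains the embeddings and the scorer jointly against both the observed triples and the injected rules.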

The paper reports that injecting rules into the learning process significantly improves the performance of the learner (the KGE model). In other words, prior knowledge encoded as logical rules significantly improves the performance of the NN.

How are people using knowledge graphs at companies? Immediate examples that come to mind are product categorisation and perhaps trivia questions. What else?

A variety of ways, actually. A common use case is custom-built ontologies and knowledge graphs for linking and annotating industrial data. Another interesting way is to impart a degree of common sense to dialogue/QA systems. Here's an excellent blog post on the topic - https://medium.com/@mgalkin/the-mushroom-effect-or-why-you-n...

Thanks for linking this paper. I have professional experience working with both knowledge graphs (as a contractor at Google) and deep learning (Capital One). Adding logical rules as prior knowledge (an analog might be structure in our brains that we are born with, not learned) to KG embedding is a really interesting idea.

Thanks for the comment. Developing a learning model that is closer to the way the human brain learns is indeed an interesting problem. The idea that there might be prior knowledge in the brain which is not learned, but which affects (guides) learning, inference, and reasoning, is really interesting. The intuition is that the real world is a big multi-modal knowledge graph with an underlying ontology, axioms, and rules (e.g., the principles of classical mechanics). With this intuition, humans learn new facts from this big real-world multi-modal knowledge graph through their (Aristotelian) senses as well as the five (inward/outward) wits. Some prior knowledge might be intrinsic. The nice point is that AI experts can map the knowledge graph and its elements (axioms, rules, symbols, facts, etc.) to a target vector space and find the equivalent of each element in that space (e.g., logical formulas are mapped to their corresponding algebraic formulas). This is actually a multidisciplinary topic, and I would invite people from different backgrounds and fields to add comments. It would be appreciated.

