Differentiable neural logic networks and their application onto inductive logic programming
Despite the impressive performance of Deep Neural Networks (DNNs), they usually lack the explanatory power of disciplines such as logic programming. Even though they can learn to solve very difficult problems, the learned knowledge is implicit, and it is very difficult, if not impossible, to interpret the explanations implicitly stored in the weights of a neural network model. Standard logic programming, on the other hand, is usually limited in scope and application compared to DNNs. The objective of this dissertation is to bridge the gap between these two disciplines by presenting a novel paradigm for learning algorithmic and discrete tasks via neural networks. This approach uses differentiable neural networks to design interpretable and explanatory models that can learn and represent Boolean functions efficiently. We investigate the application of these differentiable Neural Logic (dNL) networks in disciplines such as Inductive Logic Programming (ILP) and Relational Reinforcement Learning (RRL), as well as in discrete algorithmic tasks such as decoding LDPC codes over binary erasure channels. In particular, we reformulate ILP as a differentiable neural network by exploiting the explanatory power of dNL networks, and we show that the proposed dNL-ILP outperforms current state-of-the-art ILP solvers on a variety of benchmark tasks. We further show that the proposed differentiable ILP solver can be effectively combined with standard deep learning techniques to formulate a relational reinforcement learning framework. Via experiments, we demonstrate that the proposed deep relational policy learning framework can incorporate human expertise to learn efficient policies directly from images, and that it outperforms traditional RRL systems on some tasks.
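To make the core idea concrete, the following is a minimal sketch (not the dissertation's implementation) of one common dNL-style building block: a differentiable conjunction neuron in which each input is gated by a trainable membership weight, so that gradient descent can both fit a Boolean function and expose which inputs participate in it. The formulation and all names here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conj_forward(x, w):
    # Differentiable conjunction: each input x_i is gated by a trainable
    # membership weight m_i = sigmoid(w_i); the neuron computes
    # prod_i (1 - m_i * (1 - x_i)).  With m_i -> 1 this approaches a
    # Boolean AND over the selected inputs; with m_i -> 0 input i is ignored.
    m = sigmoid(w)
    return np.prod(1.0 - m * (1.0 - x), axis=-1)

# Learn the Boolean function y = x0 AND x1 from its truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)
lr, eps = 5.0, 1e-5
for _ in range(2000):
    # Numerical gradient of the squared error keeps this sketch
    # dependency-free; a real system would use autodiff.
    grad = np.zeros_like(w)
    for i in range(len(w)):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        grad[i] = (np.sum((conj_forward(X, wp) - y) ** 2)
                   - np.sum((conj_forward(X, wm) - y) ** 2)) / (2 * eps)
    w -= lr * grad

pred = np.round(conj_forward(X, w)).astype(int)
print(pred)        # recovered truth table of AND: [0 0 0 1]
print(sigmoid(w))  # both membership weights near 1 => both inputs included
```

The interpretability claimed in the abstract comes from the membership weights: after training, reading off which `sigmoid(w_i)` are close to 1 directly names the literals in the learned conjunction, rather than leaving the rule buried in opaque weights.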