Show simple item record

dc.contributor.author	Gray, Alexander
dc.date.accessioned	2021-01-26T22:32:19Z
dc.date.available	2021-01-26T22:32:19Z
dc.date.issued	2021-01-15
dc.identifier.uri	http://hdl.handle.net/1853/64243
dc.description	Presented online on January 15, 2021 at 2:00 p.m.	en_US
dc.description	Alexander Gray serves as VP of Foundations of AI at IBM, and currently leads a global research program in Neuro-Symbolic AI at IBM. His current interests generally revolve around the injection of non-mainstream ideas into ML/AI to attempt to break through long-standing bottlenecks of the field.	en_US
dc.description	Runtime: 59:55 minutes	en_US
dc.description.abstract	Recently there has been renewed interest in the long-standing goal of somehow unifying the capabilities of both statistical AI (learning and prediction) and symbolic AI (knowledge representation and reasoning). We introduce Logical Neural Networks, a new neuro-symbolic framework which identifies and leverages a 1-to-1 correspondence between an artificial neuron and a logic gate in a weighted form of real-valued logic. With a few key modifications of the standard modern neural network, we construct a model which performs the equivalent of logical inference rules such as modus ponens within the message-passing paradigm of neural networks, and utilizes a new form of loss, contradiction loss, which maximizes logical consistency in the face of imperfect and inconsistent knowledge. The result differs significantly from other neuro-symbolic ideas in that 1) the model is fully disentangled and understandable since every neuron has a meaning, 2) the model can perform both classical logical deduction and its real-valued generalization (which allows for the representation and propagation of uncertainty) exactly, as special cases, as opposed to approximately as in nearly all other approaches, and 3) the model is compositional and modular, allowing for fully reusable knowledge across tasks. The framework has already enabled state-of-the-art results in several problems, including question answering.	en_US
dc.format.extent	59:55 minutes
dc.language.iso	en_US	en_US
dc.relation.ispartofseries	IDEaS-AI Seminar Series	en_US
dc.subject	Artificial intelligence (AI)	en_US
dc.subject	Logical Neural Networks	en_US
dc.title	Logical Neural Networks: Towards Unifying Statistical and Symbolic AI	en_US
dc.type	Lecture	en_US
dc.type	Video	en_US
dc.contributor.corporatename	Georgia Institute of Technology. Institute for Data Engineering and Science	en_US
dc.contributor.corporatename	IBM Research	en_US
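The abstract describes neurons that act as logic gates in a weighted real-valued logic, with classical deduction recovered exactly as a special case. A minimal sketch of that idea, assuming a Łukasiewicz-style weighted AND gate (the function name, weights, and `beta` threshold are illustrative choices, not the exact formulation used in the talk):

```python
def weighted_and(inputs, weights, beta=1.0):
    """Weighted real-valued AND gate: each input's shortfall from
    truth (1 - x) is penalized by its weight, then the result is
    clamped to the truth interval [0, 1]."""
    shortfall = sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, beta - shortfall))

# With unit weights and beta = 1, classical AND is recovered exactly
# at Boolean inputs, matching the "special case" claim:
print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # 1.0
print(weighted_and([1.0, 0.0], [1.0, 1.0]))  # 0.0

# Fractional truth values propagate uncertainty through the same gate:
print(weighted_and([0.8, 0.9], [1.0, 1.0]))
```

Because every such neuron computes a named logical connective, the resulting network stays fully interpretable, which is the "every neuron has a meaning" property the abstract emphasizes.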

