Neural Reasoning Networks: Efficient Interpretable Neural Networks With Automatic Textual Explanations

Bibliographic Details
Published in: arXiv.org
Main Authors: Carrow, Stephen; Erwin, Kyle Harper; Vilenskaia, Olga; Ram, Parikshit; Klinger, Tim; Khan, Naweed Aghmad; Makondo, Ndivhuwo; Gray, Alexander
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 10.10.2024
Summary: Recent advances in machine learning have led to a surge in the adoption of neural networks for various tasks, but lack of interpretability remains an issue for many tasks in which an understanding of the features influencing the prediction is necessary to ensure fairness, safety, and legal compliance. In this paper we consider one class of such tasks, tabular dataset classification, and propose a novel neuro-symbolic architecture, Neural Reasoning Networks (NRN), that is scalable and generates logically sound textual explanations for its predictions. NRNs are connected layers of logical neurons which implement a form of real-valued logic. A training algorithm (R-NRN) learns the weights of the network as usual using gradient descent optimization with backpropagation, but also learns the network structure itself using a bandit-based optimization. Both are implemented in an extension to PyTorch (https://github.com/IBM/torchlogic) that takes full advantage of GPU scaling and batched training. Evaluation on a diverse set of 22 open-source datasets for tabular classification demonstrates performance (measured by ROC AUC) which improves over a multi-layer perceptron (MLP) and is statistically similar to other state-of-the-art approaches such as Random Forest, XGBoost and Gradient Boosted Trees, while offering 43% faster training and a more than two orders of magnitude reduction in the number of parameters required, on average. Furthermore, R-NRN explanations are shorter than those of the compared approaches while producing more accurate feature importance scores.
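The "logical neurons implementing a form of real-valued logic" mentioned in the abstract can be illustrated with a minimal sketch. This is not the torchlogic API; it is a hypothetical, generic weighted Łukasiewicz-style formulation in plain Python, where truth values live in [0, 1] and learned weights control how strongly each input influences the conjunction or disjunction:

```python
# Hypothetical sketch of weighted real-valued logic neurons.
# NOT the torchlogic implementation -- a generic Lukasiewicz-style
# formulation chosen for illustration; names and signatures are assumed.

def weighted_and(inputs, weights, bias=1.0):
    """Real-valued conjunction over truth values in [0, 1].

    Each input's "falsity" (1 - x) is scaled by its weight and
    subtracted from the bias; the result is clamped back to [0, 1].
    Output is near 1 only when every highly weighted input is near 1.
    """
    total = bias - sum(w * (1.0 - x) for x, w in zip(inputs, weights))
    return max(0.0, min(1.0, total))

def weighted_or(inputs, weights, bias=0.0):
    """Real-valued disjunction, defined from AND via De Morgan duality."""
    negated = weighted_and([1.0 - x for x in inputs], weights, 1.0 - bias)
    return 1.0 - negated

# Classical boolean cases are recovered at the extremes:
print(weighted_and([1.0, 1.0], [1.0, 1.0]))  # both true  -> 1.0
print(weighted_and([1.0, 0.0], [1.0, 1.0]))  # one false  -> 0.0
print(weighted_or([0.0, 1.0], [1.0, 1.0]))   # one true   -> 1.0
```

Because each neuron is a (sub-)differentiable function of its weights, such layers can in principle be trained end to end with gradient descent, consistent with the R-NRN training scheme the abstract describes; the learned weights and structure are then what make a textual, rule-like explanation extractable.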
ISSN: 2331-8422