Alternative Formulations of Decision Rule Learning from Neural Networks

Bibliographic Details
Published in: Machine Learning and Knowledge Extraction, Vol. 5, No. 3, pp. 937–956
Main Authors: Qiao, Litao; Wang, Weijia; Lin, Bill
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.09.2023

Summary: This paper extends recent work on decision rule learning from neural networks for tabular data classification. We propose alternative formulations of trainable Boolean logic operators as neurons with continuous weights, including trainable NAND neurons. These alternative formulations provide a uniform treatment of the different trainable logic neurons so that they can be trained uniformly, which enables, for example, the direct application of existing sparsity-promoting neural network training techniques, such as reweighted L1 regularization, to derive sparse networks that translate to simpler rules. In addition, we present an alternative network architecture based on trainable NAND neurons, applying De Morgan's law to realize a NAND-NAND network instead of an AND-OR network; both can be readily mapped to decision rule sets. Our experimental results show that these alternative formulations can also generate accurate decision rule sets that achieve state-of-the-art accuracy in tabular learning applications.
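
To make the summary's key ideas concrete, the sketch below shows how a trainable NAND neuron with continuous weights, a NAND-NAND network (via De Morgan's law, OR_j(AND_i(x)) = NAND_j(NAND_i(x))), and a reweighted L1 penalty could fit together. This is a minimal PyTorch illustration under assumed formulations: the gated-product soft AND, the names SoftNAND, NANDNANDNet, and reweighted_l1, and the toy training loop are all hypothetical, not the paper's actual definitions.

```python
# Illustrative sketch only; the soft-NAND relaxation and names below are
# assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn


class SoftNAND(nn.Module):
    """Trainable NAND neuron with continuous weights (assumed relaxation).

    A sigmoid maps each continuous parameter to a gate w in (0, 1). With
    inputs x in [0, 1], a soft AND over the gated inputs is
        AND(x) = prod_i (1 - w_i * (1 - x_i)),
    so w_i -> 1 includes input i in the conjunction and w_i -> 0 ignores it.
    The neuron outputs NAND(x) = 1 - AND(x).
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_features, in_features))

    def gates(self) -> torch.Tensor:
        return torch.sigmoid(self.theta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.gates()                                    # (out, in)
        # Broadcast to (batch, out, in) and reduce the product over inputs.
        and_out = torch.prod(1.0 - w.unsqueeze(0) * (1.0 - x.unsqueeze(1)), dim=-1)
        return 1.0 - and_out                                # NAND = NOT AND


class NANDNANDNet(nn.Module):
    """Two stacked NAND layers. By De Morgan's law,
    OR_j(AND_i(x)) = NAND_j(NAND_i(x)), so this computes the same function
    as an AND-OR network and maps directly to a decision rule set."""

    def __init__(self, in_features: int, num_rules: int):
        super().__init__()
        self.rules = SoftNAND(in_features, num_rules)       # one NAND per rule
        self.head = SoftNAND(num_rules, 1)                  # NAND over the rules

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.rules(x))


def reweighted_l1(model: NANDNANDNet, eps: float = 1e-8) -> torch.Tensor:
    """One-step approximation of reweighted L1: each gate is scaled by the
    inverse of its detached current magnitude, pushing small gates toward
    zero faster than plain L1 and yielding sparser, simpler rules."""
    penalty = torch.tensor(0.0)
    for layer in (model.rules, model.head):
        w = layer.gates()
        penalty = penalty + (w / (w.detach() + eps)).sum()
    return penalty


if __name__ == "__main__":
    # Tiny smoke test on binary features: fit y = (x0 AND x1) OR x2.
    torch.manual_seed(0)
    x = torch.randint(0, 2, (256, 3)).float()
    y = ((x[:, 0] * x[:, 1] + x[:, 2]) > 0).float().unsqueeze(1)
    net = NANDNANDNet(in_features=3, num_rules=4)
    opt = torch.optim.Adam(net.parameters(), lr=0.05)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy(net(x), y) + 1e-3 * reweighted_l1(net)
        loss.backward()
        opt.step()
```

After training, one plausible rule-extraction step is to threshold each gate at 0.5, binarizing every NAND neuron into a conjunction over its selected inputs; this is what makes the two-layer network readable as a decision rule set.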
ISSN: 2504-4990
DOI: 10.3390/make5030049