Focused Negative Sampling for Increased Discriminative Power in Tsetlin Machines

Bibliographic Details
Published in: 2022 International Symposium on the Tsetlin Machine (ISTM), pp. 73 - 80
Main Authors: Glimsdal, Sondre, Saha, Rupsa, Bhattarai, Bimal, Giri, Charul, Sharma, Jivitesh, Tunheim, Svein Anders, Yadav, Rohan Kumar
Format: Conference Proceeding
Language: English
Published: IEEE, 01.06.2022

Summary: Tsetlin Machines learn from input data by creating patterns in propositional logic, using the literals available in the data. These patterns vote for the classes in a classification task. Despite their simplistic premise, Tsetlin machines (TMs) have been performing on par with other popular machine learning methods across various benchmarks. Beyond accuracy, TMs also perform well in terms of energy efficiency and learning speed. The general TM scheme works best when there is sufficient discriminatory information available between two classes. In this paper, we explore the use of focused negative sampling (FNS) to discriminate between classes which are not easily distinguishable from each other. We carry out experiments across diverse classification tasks spanning natural language processing, image processing, and reinforcement learning to show that this approach forces the TM to arrive at patterns that can successfully tell apart two classes that are correlated. Further, we show that the proposed method achieves accuracy comparable to a vanilla Tsetlin Machine approach, but in approximately 42% fewer epochs on average.
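
The abstract's description of clause voting and focused negative sampling can be illustrated with a minimal sketch. The following Python snippet is an assumption-laden toy, not the authors' implementation: it sums votes from hand-made conjunctive clauses per class and then picks a "focused" negative class as the non-target class the machine currently scores highest (most confusable), instead of a random one. The names clause_votes and pick_negative_class, and the clause encoding, are hypothetical.

# Minimal sketch (not the paper's implementation) of clause voting and a
# confusion-based choice of negative class; all names are hypothetical.
import random

def clause_votes(x, clauses):
    # Each clause is (polarity, literals); literals is a list of
    # (feature_index, expected_value) pairs. A clause contributes its
    # polarity only if every literal matches the binary input x.
    total = 0
    for polarity, literals in clauses:
        if all(x[i] == v for i, v in literals):
            total += polarity
    return total

def pick_negative_class(target, class_scores):
    # Focused choice: the non-target class with the highest vote sum for
    # this input, i.e. the class most likely to be confused with the target.
    return max((c for c in class_scores if c != target),
               key=lambda c: class_scores[c])

# Toy example: two binary features, three classes, a few hand-made clauses.
clauses_per_class = {
    0: [(+1, [(0, 1)]), (-1, [(1, 1)])],
    1: [(+1, [(1, 1)]), (-1, [(0, 0)])],
    2: [(+1, [(0, 1), (1, 1)])],
}
x = [1, 1]
scores = {c: clause_votes(x, cl) for c, cl in clauses_per_class.items()}
target = 0
random_negative = random.choice([c for c in scores if c != target])
focused_negative = pick_negative_class(target, scores)
print(scores, "random:", random_negative, "focused:", focused_negative)
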
DOI: 10.1109/ISTM54910.2022.00021