An Interpretable Word Sense Classifier for Human Explainable Chatbot
Published in: Agents and Artificial Intelligence, pp. 236-249
Main Authors: , , ,
Format: Book Chapter
Language: English
Published: Cham: Springer International Publishing
Series: Lecture Notes in Computer Science
Summary: Explainable Artificial Intelligence (AI)-based chatbots are one of the most ambitious and unsolved areas of conversational AI. Recently, there has been a boom in neural network architectures such as BERT and GPT that capture the sense of words/phrases in a sentence. However, such models fail to explain the logical reasoning behind their language understanding, making the foundation of the chatbot unreliable. In this paper, we extend the previous Tsetlin Machine (TM)-based Word Sense Disambiguation task to the complete set of 20 words and design a fully explainable word sense classifier using the TM that helps the chatbot understand the concept behind a word/phrase. Our experiments show that the proposed model performs on par with state-of-the-art accuracy on the publicly available CoarseWSD-balanced dataset. In addition, we explore in depth how each interpretable TM clause carries context information that a human can readily explain, supporting the design of a trustworthy chatbot.
ISBN: 9783031101601; 303110160X
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-031-10161-8_13
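The clause-based classification the summary describes can be sketched as follows. This is a minimal illustration in the spirit of a Tsetlin Machine, not the paper's method: the vocabulary and clauses below are hypothetical hand-written examples, whereas a real TM learns its clauses with teams of Tsetlin automata over Boolean features.

```python
def clause_matches(clause, features):
    """A clause is a conjunction of literals: context words that must be
    present and context words that must be absent."""
    must_have, must_not_have = clause
    return all(w in features for w in must_have) and \
           all(w not in features for w in must_not_have)

def classify(context_words, clauses_per_sense):
    """Each sense has positive (+1) and negative (-1) voting clauses;
    the sense with the highest net clause vote wins."""
    features = set(context_words)
    scores = {}
    for sense, (pos_clauses, neg_clauses) in clauses_per_sense.items():
        scores[sense] = sum(clause_matches(c, features) for c in pos_clauses) \
                      - sum(clause_matches(c, features) for c in neg_clauses)
    return max(scores, key=scores.get), scores

# Hypothetical clauses for disambiguating the word "apple":
clauses = {
    "fruit":   ([({"tree"}, set()), ({"eat"}, {"iphone"})], [({"store"}, set())]),
    "company": ([({"iphone"}, set()), ({"store"}, {"tree"})], [({"eat"}, set())]),
}

sense, scores = classify(["i", "bought", "an", "iphone", "at", "the", "store"], clauses)
print(sense)  # -> company
```

Because each clause is a readable conjunction of word presence/absence literals, a human can inspect exactly which context evidence voted for the chosen sense, which is the interpretability property the chapter exploits.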