MaNtLE: Model-agnostic Natural Language Explainer

Bibliographic Details
Published in: arXiv.org
Main Authors: Menon, Rakesh R.; Zaman, Kerem; Srivastava, Shashank
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 22.05.2023

Summary: Understanding the internal reasoning behind the predictions of machine learning systems is increasingly vital, given their rising adoption and acceptance. While previous approaches, such as LIME, generate algorithmic explanations by attributing importance to input features for individual examples, recent research indicates that practitioners prefer examining language explanations that describe sub-groups of examples. In this paper, we introduce MaNtLE, a model-agnostic natural language explainer that analyzes multiple classifier predictions and generates faithful natural language explanations of classifier rationale for structured classification tasks. MaNtLE uses multi-task training on thousands of synthetic classification tasks to generate faithful explanations. Simulated user studies indicate that, on average, MaNtLE-generated explanations are at least 11% more faithful than LIME and Anchors explanations across three tasks. Human evaluations demonstrate that users can better predict model behavior using explanations from MaNtLE than with other techniques.
ISSN: 2331-8422
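
The abstract measures faithfulness in simulatability terms: an explanation is faithful to the extent that applying it reproduces the classifier's own predictions over a group of examples. Below is a minimal Python sketch of that idea, assuming a hypothetical rule-based reading of an explanation; the function names, rule format, and toy data are illustrative assumptions, not MaNtLE's actual implementation.

    # A minimal sketch (assumed, not from the paper) of a simulatability-style
    # faithfulness score: how often does the label implied by an explanation
    # match the classifier's own prediction?
    from typing import Callable, Dict, List

    Example = Dict[str, float]  # one structured-classification input

    def faithfulness(examples: List[Example],
                     classifier_preds: List[int],
                     rule: Callable[[Example], int]) -> float:
        """Fraction of examples where the explanation-implied label
        matches the classifier's prediction."""
        hits = sum(1 for x, y in zip(examples, classifier_preds) if rule(x) == y)
        return hits / len(examples)

    # Toy sub-group of examples and the classifier's predictions for them.
    examples = [{"age": 25, "income": 40}, {"age": 52, "income": 90},
                {"age": 31, "income": 55}, {"age": 19, "income": 20}]
    classifier_preds = [0, 1, 1, 0]

    # A language explanation such as "the model predicts 1 when age is over 30",
    # grounded as an executable rule for scoring (hypothetical format).
    over_30_rule = lambda x: int(x["age"] > 30)

    print(f"faithfulness = {faithfulness(examples, classifier_preds, over_30_rule):.2f}")
    # -> faithfulness = 1.00 on this toy set

Under this reading, the abstract's "at least 11% more faithful" claim compares scores of this kind for MaNtLE-generated explanations against those for LIME and Anchors explanations.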