Learning from the Best: Rationalizing Prediction by Adversarial Information Calibration
Main Authors | Sha, Lei; Camburu, Oana-Maria; Lukasiewicz, Thomas |
---|---|
Format | Journal Article |
Language | English |
Published | 16.12.2020 |
Summary: | Proceedings of the 35th AAAI Conference on Artificial Intelligence, 2021. Explaining the predictions of AI models is paramount in safety-critical applications, such as in the legal or medical domains. One form of explanation for a prediction is an extractive rationale, i.e., a subset of features of an instance that leads the model to give its prediction on that instance. Previous works on generating extractive rationales usually employ a two-phase model: a selector that selects the most important features (i.e., the rationale), followed by a predictor that makes the prediction based exclusively on the selected features. One disadvantage of these works is that the main signal for learning to select features comes from comparing the predictor's answers with the ground-truth answers. In this work, we propose to squeeze more information from the predictor via an information calibration method. More precisely, we train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction. The first model serves as a guide for the second. We use an adversarial technique to calibrate the information extracted by the two models, such that the difference between them is an indicator of missed or over-selected features. In addition, for natural language tasks, we propose a language-model-based regularizer to encourage the extraction of fluent rationales. Experimental results on a sentiment analysis task as well as on three tasks from the legal domain show the effectiveness of our approach to rationale extraction. |
DOI: | 10.48550/arxiv.2012.08884 |
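To make the calibration idea in the summary concrete, below is a minimal, runnable PyTorch sketch of an adversarial feature-calibration loss between a black-box guide model and a selector-predictor model. It is an illustration under assumed architectures, not the authors' implementation: `GuideModel`, `SelectorPredictor`, the GRU encoders, and all dimensions and weights are hypothetical, and the language-model fluency regularizer is omitted.

```python
# Illustrative sketch (not the paper's code): a black-box guide model and a
# selector-predictor whose pooled features are adversarially calibrated.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, N_CLASSES = 64, 2  # assumed feature size and label count

class GuideModel(nn.Module):
    """Accurate black-box model; its pooled features are the calibration target."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, N_CLASSES)

    def forward(self, x):                      # x: (batch, seq, DIM)
        h, _ = self.encoder(x)
        feat = h.mean(dim=1)                   # pooled representation
        return self.head(feat), feat

class SelectorPredictor(nn.Module):
    """Selects a soft rationale mask, then predicts from the masked input only."""
    def __init__(self):
        super().__init__()
        self.selector = nn.Linear(DIM, 1)
        self.encoder = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, N_CLASSES)

    def forward(self, x):
        mask = torch.sigmoid(self.selector(x)).squeeze(-1)  # (batch, seq)
        h, _ = self.encoder(x * mask.unsqueeze(-1))         # keep selected features
        feat = h.mean(dim=1)
        return self.head(feat), feat, mask

# The discriminator tries to tell guide features from selector-predictor
# features; the selector-predictor is trained to fool it, so any remaining
# gap flags missed or over-selected features.
discriminator = nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))

def calibration_losses(feat_guide, feat_sp):
    real = discriminator(feat_guide.detach())
    fake = discriminator(feat_sp)
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake.detach(),
                                                   torch.zeros_like(fake)))
    g_loss = F.binary_cross_entropy_with_logits(fake, torch.ones_like(fake))
    return d_loss, g_loss  # d_loss trains the discriminator, g_loss the selector-predictor

# One training step on random data; both models also get a task loss.
x = torch.randn(8, 20, DIM)
y = torch.randint(0, N_CLASSES, (8,))
guide, sp = GuideModel(), SelectorPredictor()
logits_g, feat_g = guide(x)
logits_s, feat_s, mask = sp(x)
d_loss, g_loss = calibration_losses(feat_g, feat_s)
sparsity = mask.mean()                         # encourage short rationales
sp_loss = F.cross_entropy(logits_s, y) + g_loss + 0.1 * sparsity
```

In this sketch, `d_loss` updates only the discriminator while `g_loss` pushes the selector-predictor's features toward the guide's, mirroring the calibration signal the summary describes; the sparsity term is a common addition in selector-predictor rationale models to keep rationales short.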