Few-Shot Out-of-Domain Transfer Learning of Natural Language Explanations in a Label-Abundant Setup
Main Authors | , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 12.12.2021 |
Summary: | Training a model to provide natural language explanations (NLEs) for its predictions usually requires the acquisition of task-specific NLEs, which is time- and resource-consuming. A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task. In this work, we examine the setup in which the child task has few NLEs but abundant labels. We establish four few-shot transfer learning methods that cover the possible fine-tuning combinations of the labels and NLEs for the parent and child tasks. We transfer explainability from a large natural language inference dataset (e-SNLI) separately to two child tasks: (1) hard cases of pronoun resolution, where we introduce the small-e-WinoGrande dataset of NLEs on top of the WinoGrande dataset, and (2) commonsense validation (ComVE). Our results demonstrate that the parent task helps with NLE generation, and we identify the best methods for this setup. |
---|---|
DOI: | 10.48550/arxiv.2112.06204 |
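
The "four few-shot transfer learning methods" in the abstract are orderings of fine-tuning on the parent task's labels and NLEs and on the child task's abundant labels and few NLEs. Below is a minimal sketch of the mechanics of one such ordering, assuming a T5-style sequence-to-sequence backbone from Hugging Face Transformers; the prompt formats, toy data, and the `finetune` helper are illustrative assumptions, not the paper's released code or its exact methods.

```python
# Sketch of few-shot NLE transfer under an assumed T5-style seq2seq setup.
# Prompt formats, toy examples, and this training loop are illustrative only.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "t5-base"  # assumed backbone; the paper's choice may differ

def finetune(model, tokenizer, examples, epochs=1, lr=1e-4):
    """Toy fine-tuning loop; `examples` is a list of (input, target) strings."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for source, target in examples:
            batch = tokenizer(source, return_tensors="pt", truncation=True)
            targets = tokenizer(target, return_tensors="pt", truncation=True)
            loss = model(**batch, labels=targets.input_ids).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Parent task (e-SNLI): abundant labels and abundant NLEs.
parent_labels = [("nli premise: A dog runs. hypothesis: An animal moves.",
                  "entailment")]
parent_nles = [("explain nli premise: A dog runs. hypothesis: An animal moves.",
                "A dog is an animal, and running is a way of moving.")]
# Child task (WinoGrande-style pronoun resolution): abundant labels, few NLEs.
child_labels = [("resolve: The trophy didn't fit in the case because _ was "
                 "too big. options: trophy, case", "trophy")]
child_nles = [("explain resolve: The trophy didn't fit in the case because _ "
               "was too big. options: trophy, case",
               "Something that does not fit is too big, so _ is the trophy.")]

# One possible fine-tuning ordering: parent labels -> parent NLEs ->
# child labels -> few child NLEs. The paper compares combinations of such
# stages; this script only demonstrates the mechanics of staged fine-tuning.
for stage in (parent_labels, parent_nles, child_labels, child_nles):
    finetune(model, tokenizer, stage)

# Generate an NLE for a child-task input.
ids = tokenizer(child_nles[0][0], return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the label-abundant setup the abstract describes, the child-label stage would use the full labelled child dataset while the child-NLE stage sees only a handful of examples, which is what makes transferring explainability from the parent task's NLEs valuable.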