Parameter-Efficient Abstractive Question Answering over Tables or Text

Bibliographic Details
Published in: arXiv.org
Main Authors: Pal, Vaishali; Kanoulas, Evangelos; de Rijke, Maarten
Format: Paper, Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 07.04.2022

Summary: A long-term ambition of information-seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory-intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality, like unstructured text or structured tables. To avoid training such memory-hungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottleneck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data using only 1.5% additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7%-1.0% leads to comparable results. Our models outperform current state-of-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA with significantly fewer trainable parameters than fine-tuning.
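
The summary describes adapters as small task-specific bottleneck layers inserted between transformer layers, with only those layers trained. The sketch below is a minimal, illustrative PyTorch module in that style; it is not the authors' implementation, and the hidden size (768) and bottleneck size (48) are assumptions chosen only to show why the trainable-parameter overhead stays small relative to a frozen pre-trained encoder-decoder model.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter placed after a transformer sub-layer.

    The hidden state is projected down to a small bottleneck dimension,
    passed through a non-linearity, projected back up, and added to the
    input via a residual connection. Only these two small projections are
    trained; the surrounding pre-trained weights stay frozen, which keeps
    the additional trainable parameters at a small fraction of the model.
    """

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact
        # when the adapter is near-identity at initialization.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


if __name__ == "__main__":
    adapter = BottleneckAdapter()
    x = torch.randn(2, 16, 768)   # (batch, sequence length, hidden size)
    print(adapter(x).shape)       # torch.Size([2, 16, 768])
    trainable = sum(p.numel() for p in adapter.parameters())
    print(f"trainable adapter parameters per layer: {trainable}")
```

In practice one such adapter would be added per transformer layer (in the encoder, the decoder, or both, as in the ablation the summary mentions), and the bottleneck dimension is the main knob that trades parameter count against task performance.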
ISSN: 2331-8422
DOI: 10.48550/arxiv.2204.03357