Multi-Source Domain Generalization Using Domain Attributes for Recurrent Neural Network Language Models

Bibliographic Details
Published in: IEICE Transactions on Information and Systems, Vol. E105.D, No. 1, pp. 150-160
Main Authors: TAWARA, Naohiro; OGAWA, Atsunori; IWATA, Tomoharu; ASHIKAWA, Hiroto; KOBAYASHI, Tetsunori; OGAWA, Tetsuji
Format: Journal Article
Language: English
Published: Tokyo: The Institute of Electronics, Information and Communication Engineers, 01.01.2022; Japan Science and Technology Agency

Summary: Most conventional multi-source domain adaptation techniques for recurrent neural network language models (RNNLMs) are domain-centric: each domain is considered independently, which makes it difficult to apply the models to completely unseen target domains that are unobservable during training. Instead, our study exploits domain attributes, which represent knowledge shared across different domains, such as dialects, types of wording, styles, and topics, to achieve domain generalization that can robustly represent unseen target domains by combining the domain attributes. To achieve attribute-based domain generalization in language modeling, we introduce domain attribute-based experts, instead of domain-based experts, into a multi-stream RNNLM called the recurrent adaptive mixture model (RADMM). In the proposed system, a long short-term memory (LSTM) network is independently trained on each domain attribute as an expert model. The outputs of all the experts are then integrated with context-dependent weights over the domain attributes of the current input, allowing the model to predict subsequent words in the unseen target domain while exploiting the specific knowledge of each domain attribute. To demonstrate the effectiveness of the proposed domain attribute-centric language model, we experimentally compared it with a conventional domain-centric language model using texts taken from multiple domains with different writing styles, topics, dialects, and types of wording. The experimental results demonstrated that lower perplexity can be achieved by using domain attributes.
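
The following is a minimal PyTorch-style sketch of the kind of attribute-expert mixture described in the summary; it illustrates the general idea rather than the authors' implementation. One LSTM expert is kept per domain attribute, a gating layer produces context-dependent weights over the attributes, and the weighted combination of expert outputs predicts the next word. All module names, hyperparameters, and the gating scheme below are assumptions for illustration.

import torch
import torch.nn as nn

class AttributeExpertLM(nn.Module):
    # Hypothetical mixture-of-attribute-experts language model (sketch only).
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_attributes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One LSTM expert per domain attribute (e.g., style, topic, dialect, wording).
        self.experts = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden_dim, batch_first=True) for _ in range(num_attributes)]
        )
        # Gating layer: context-dependent weights over the attribute experts.
        self.gate = nn.Linear(emb_dim, num_attributes)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                          # (batch, seq, emb_dim)
        weights = torch.softmax(self.gate(x), dim=-1)   # (batch, seq, num_attributes)
        expert_outs = torch.stack(
            [expert(x)[0] for expert in self.experts], dim=-1
        )                                               # (batch, seq, hidden_dim, num_attributes)
        mixed = (expert_outs * weights.unsqueeze(2)).sum(dim=-1)
        return self.out(mixed)                          # next-word logits

# Usage: score a toy batch of token ids with four attribute experts.
model = AttributeExpertLM(vocab_size=10000, emb_dim=128, hidden_dim=256, num_attributes=4)
logits = model(torch.randint(0, 10000, (2, 20)))        # shape (2, 20, 10000)

Note that in the paper each expert is trained independently on text carrying its attribute before the outputs are combined at prediction time; the sketch folds everything into a single jointly trained module purely for brevity.
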
ISSN: 0916-8532
1745-1361
DOI: 10.1587/transinf.2021EDP7081