DeFINE: DEep Factorized INput Token Embeddings for Neural Sequence Modeling

Bibliographic Details
Published in: arXiv.org
Main Authors: Mehta, Sachin; Koncel-Kedziorski, Rik; Rastegari, Mohammad; Hajishirzi, Hannaneh
Format: Paper / Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 06.02.2020

Summary: For sequence models with large vocabularies, a majority of network parameters lie in the input and output layers. In this work, we describe a new method, DeFINE, for learning deep token representations efficiently. Our architecture uses a hierarchical structure with novel skip-connections which allows for the use of low dimensional input and output layers, reducing total parameters and training time while delivering similar or better performance versus existing methods. DeFINE can be incorporated easily in new or existing sequence models. Compared to state-of-the-art methods including adaptive input representations, this technique results in a 6% to 20% drop in perplexity. On WikiText-103, DeFINE reduces the total parameters of Transformer-XL by half with minimal impact on performance. On the Penn Treebank, DeFINE improves AWD-LSTM by 4 points with a 17% reduction in parameters, achieving comparable performance to state-of-the-art methods with fewer parameters. For machine translation, DeFINE improves the efficiency of the Transformer model by about 1.4 times while delivering similar performance.
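
The factorization idea summarized above can be illustrated with a minimal PyTorch sketch. This is a simplified, hypothetical rendering only: tokens are looked up in a low-dimensional table and then expanded to the model dimension through a small stack of linear layers, each receiving a skip-connection from the low-dimensional input. The class name, layer widths, depth, and activation below are illustrative assumptions and do not reproduce the paper's exact hierarchical transformation.

import torch
import torch.nn as nn


class FactorizedEmbedding(nn.Module):
    """Sketch of a DeFINE-style factorized embedding (simplified)."""

    def __init__(self, vocab_size, input_dim=128, hidden_dims=(256, 512), output_dim=1024):
        super().__init__()
        # Low-dimensional lookup table: vocab_size x input_dim instead of
        # vocab_size x output_dim, which is where the parameter savings come from.
        self.lookup = nn.Embedding(vocab_size, input_dim)
        layers = []
        prev = input_dim
        for h in hidden_dims:
            # Each layer also sees the original low-dimensional embedding
            # (a simple skip-connection), so its input width is prev + input_dim.
            layers.append(nn.Linear(prev + input_dim, h))
            prev = h
        self.layers = nn.ModuleList(layers)
        self.project = nn.Linear(prev + input_dim, output_dim)
        self.act = nn.GELU()

    def forward(self, tokens):
        e = self.lookup(tokens)                      # (..., input_dim)
        x = e
        for layer in self.layers:
            x = self.act(layer(torch.cat([x, e], dim=-1)))
        return self.project(torch.cat([x, e], dim=-1))   # (..., output_dim)


# Usage: embed a batch of token ids into the model dimension.
emb = FactorizedEmbedding(vocab_size=50000)
out = emb(torch.randint(0, 50000, (8, 32)))          # -> torch.Size([8, 32, 1024])

Because the lookup table stays low-dimensional, the per-token cost of the deep expansion is shared across the whole vocabulary, which is what allows the input and output layers to shrink without a large drop in quality.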
ISSN: 2331-8422
DOI: 10.48550/arxiv.1911.12385