DA-MoE: Towards Dynamic Expert Allocation for Mixture-of-Experts Models

Bibliographic Details
Published in: arXiv.org
Main Authors: Akhavan Aghdam, Maryam; Jin, Hongpeng; Wu, Yanzhao
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 10.09.2024

More Information
Summary: Transformer-based Mixture-of-Experts (MoE) models have been driving several recent technological advancements in Natural Language Processing (NLP). These MoE models adopt a router mechanism to determine which experts to activate for routing input tokens. However, existing router mechanisms allocate a fixed number of experts to each token, which neglects the varying importance of different input tokens. In this study, we propose a novel dynamic router mechanism that Dynamically Allocates a variable number of experts for Mixture-of-Experts (DA-MoE) models based on an effective token importance measure. First, we show that the Transformer attention mechanism provides a natural and effective way of calculating token importance. Second, we propose a dynamic router mechanism that effectively decides the optimal number of experts (K) and allocates the top-K experts for each input token. Third, comprehensive experiments on several benchmark datasets demonstrate that our DA-MoE approach consistently outperforms the state-of-the-art Transformer-based MoE model on the popular GLUE benchmark.
ISSN: 2331-8422
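
The summary describes the mechanism only at a high level: attention-derived token importance drives how many experts (K) each token receives. The sketch below illustrates one way such an attention-guided dynamic top-K router could be written in PyTorch. It is an illustrative reconstruction under stated assumptions, not the authors' released implementation: the specific importance measure (attention received per token), the rule mapping importance to K, and all names (DynamicRouter, max_experts) are assumptions.

```python
# Minimal sketch of a dynamic expert-allocation router, assuming an
# attention-based importance score and a simple importance-to-K rule.
# Not the DA-MoE authors' code; names and heuristics are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicRouter(nn.Module):
    """Routes each token to a variable number of experts (1..max_experts),
    using an attention-derived importance score to decide how many."""

    def __init__(self, d_model: int, num_experts: int, max_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.max_experts = max_experts

    @staticmethod
    def token_importance(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (batch, heads, seq, seq) softmax attention maps.
        # Treat the attention a token receives from all other tokens
        # (averaged over heads) as its importance -- one plausible reading
        # of the paper's attention-based measure.
        received = attn_weights.mean(dim=1).sum(dim=1)          # (batch, seq)
        return received / received.sum(dim=-1, keepdim=True)

    def forward(self, x: torch.Tensor, attn_weights: torch.Tensor):
        # x: (batch, seq, d_model)
        logits = self.gate(x)                                    # (batch, seq, num_experts)
        probs = F.softmax(logits, dim=-1)

        # Map normalized importance to an integer K in [1, max_experts]:
        # more important tokens get more experts.
        imp = self.token_importance(attn_weights)                # (batch, seq)
        scaled = imp / imp.amax(dim=-1, keepdim=True).clamp_min(1e-9)
        k_per_token = (scaled * self.max_experts).ceil().clamp(1, self.max_experts).long()

        # Keep only each token's top-k expert weights; zero out the rest
        # and renormalize so the kept gates sum to 1.
        topk_vals, topk_idx = probs.topk(self.max_experts, dim=-1)
        ranks = torch.arange(self.max_experts, device=x.device)
        keep = ranks.view(1, 1, -1) < k_per_token.unsqueeze(-1)  # (batch, seq, max_experts)
        gates = torch.zeros_like(probs).scatter(-1, topk_idx, topk_vals * keep)
        gates = gates / gates.sum(dim=-1, keepdim=True).clamp_min(1e-9)
        return gates, k_per_token
```

In this sketch, the returned gates would weight the outputs of the selected experts exactly as in a standard top-K MoE layer; the only difference from a fixed-K router is that K varies per token with its importance score.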