Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs
Published in | arXiv.org |
---|---|
Main Authors | Liu, Enshu; Zhu, Junyi; Lin, Zinan; Ning, Xuefei; Blaschko, Matthew B.; Shengen Yan; Dai, Guohao; Yang, Huazhong; Wang, Yu |
Format | Paper |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 01.07.2024 |
Subjects | Energy consumption; Inference; Large language models; Mixtures; Parameters; Performance enhancement; Power consumption; Pruning |
Online Access | Get full text |
Abstract | The rapid advancement of large language models (LLMs) has led to architectures with billions to trillions of parameters, posing significant deployment challenges due to their substantial demands on memory, processing power, and energy consumption. Sparse Mixture-of-Experts (SMoE) architectures have emerged as a solution, activating only a subset of parameters per token, thereby achieving faster inference while maintaining performance. However, SMoE models still face limitations in broader deployment due to their large parameter counts and significant GPU memory requirements. In this work, we introduce a gradient-free evolutionary strategy named EEP (Efficient Expert Pruning) to enhance the pruning of experts in SMoE models. EEP relies solely on model inference (i.e., no gradient computation) and achieves greater sparsity while maintaining or even improving performance on downstream tasks. EEP can be used to reduce both the total number of experts (thus saving GPU memory) and the number of active experts (thus accelerating inference). For example, we demonstrate that pruning up to 75% of experts in Mixtral 8×7B-Instruct results in a substantial reduction in parameters with minimal performance loss. Remarkably, we observe improved performance on certain tasks, such as a significant increase in accuracy on the SQuAD dataset (from 53.4% to 75.4%), when pruning half of the experts. With these results, EEP not only lowers the barrier to deploying SMoE models, but also challenges the conventional understanding of model pruning by showing that fewer experts can lead to better task-specific performance without any fine-tuning. Code is available at https://github.com/imagination-research/EEP. |
Copyright | 2024. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
Discipline | Physics |
EISSN | 2331-8422 |
Genre | Working Paper/Pre-Print |
URI | https://www.proquest.com/docview/3074864310 |
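The abstract describes EEP as a gradient-free, inference-only evolutionary search over which experts of an SMoE model to keep. As a rough illustration of that general idea (not the paper's released implementation; see the GitHub link in the abstract for that), the minimal sketch below evolves binary expert-retention masks against a placeholder validation score. All names and constants here (`NUM_EXPERTS`, `KEEP`, `validation_score`, the mutation scheme, the population size) are illustrative assumptions.

```python
# Illustrative sketch only: a gradient-free evolutionary search over expert-retention
# masks, in the spirit of the EEP idea summarized in the abstract. The fitness
# function and all constants are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # e.g. Mixtral 8x7B routes over 8 experts per MoE layer
KEEP = 4          # prune half of the experts, as in the abstract's SQuAD example
POP_SIZE = 16
GENERATIONS = 30

# Hypothetical stand-in for "evaluate the pruned model on a small validation
# set and return a task score". Forward passes only, no gradients.
TRUE_UTILITY = rng.random(NUM_EXPERTS)  # toy hidden per-expert usefulness

def validation_score(mask: np.ndarray) -> float:
    if mask.sum() != KEEP:              # enforce the expert budget
        return float("-inf")
    return float(TRUE_UTILITY[mask.astype(bool)].sum())

def random_mask() -> np.ndarray:
    mask = np.zeros(NUM_EXPERTS, dtype=np.int8)
    mask[rng.choice(NUM_EXPERTS, size=KEEP, replace=False)] = 1
    return mask

def mutate(mask: np.ndarray) -> np.ndarray:
    # Swap one kept expert for one pruned expert, preserving the budget.
    child = mask.copy()
    kept = np.flatnonzero(child == 1)
    dropped = np.flatnonzero(child == 0)
    child[rng.choice(kept)] = 0
    child[rng.choice(dropped)] = 1
    return child

population = [random_mask() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=validation_score, reverse=True)
    parents = population[: POP_SIZE // 4]            # keep the fittest masks
    population = parents + [mutate(p) for p in parents for _ in range(3)]

best = max(population, key=validation_score)
print("kept experts:", np.flatnonzero(best).tolist(), "score:", validation_score(best))
```

In a real setting, the `validation_score` placeholder would run the pruned SMoE model's forward pass on a small calibration set for the target task, which is consistent with the abstract's claim that only model inference, and no gradient computation, is required.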