Unveiling and Consulting Core Experts in Retrieval-Augmented MoE-based LLMs
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 20.10.2024 |
Subjects | |
Online Access | Get full text |
Summary: | Retrieval-Augmented Generation (RAG) has significantly improved the ability of
Large Language Models (LLMs) to solve knowledge-intensive tasks. While existing
research seeks to enhance RAG performance by retrieving higher-quality
documents or designing RAG-specific LLMs, the internal mechanisms within LLMs
that contribute to the effectiveness of RAG systems remain underexplored. In
this paper, we investigate these internal mechanisms within popular
Mixture-of-Experts (MoE)-based LLMs and demonstrate how to improve RAG by
examining expert activations in these LLMs. Our controlled experiments reveal
that several core groups of experts are primarily responsible for RAG-related
behaviors. The activation of these core experts can signify the model's
inclination towards external or internal knowledge and adjust its behavior. For
instance, we identify core experts that can (1) indicate the sufficiency of the
model's internal knowledge, (2) assess the quality of retrieved documents, and
(3) enhance the model's ability to utilize context. Based on these findings, we
propose several strategies to improve RAG's efficiency and effectiveness
through expert activation. Experimental results across various datasets and
MoE-based LLMs show the effectiveness of our method. |
---|---|
DOI: | 10.48550/arxiv.2410.15438 |
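
The abstract describes using the activation of identified core experts as a signal for, e.g., whether the model's internal knowledge suffices or retrieval should be consulted. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's implementation: the function names, the set of "core" expert ids, the simulated routing decisions, and the threshold are all assumptions for illustration only.

```python
# Hedged sketch (illustrative only): given per-token top-k routing decisions
# from an MoE model, estimate how often a hypothetical set of "core experts"
# is activated, and apply a threshold to decide whether to fall back to
# retrieval. Expert ids, threshold, and data below are made up.
import numpy as np


def core_expert_activation_rate(router_topk_indices, core_experts):
    """Fraction of routing slots (tokens x top_k) assigned to core experts.

    router_topk_indices: int array of shape (num_tokens, top_k) with expert ids
    core_experts: set of expert ids previously identified as "core"
    """
    hits = np.isin(router_topk_indices, list(core_experts))
    return hits.mean()


def should_retrieve(router_topk_indices, core_experts, threshold=0.15):
    """Heuristic: low activation of the hypothetical knowledge-sufficiency
    experts is taken as a sign of insufficient internal knowledge."""
    return core_expert_activation_rate(router_topk_indices, core_experts) < threshold


# Toy usage with simulated routing (8 experts, top-2 routing, 32 tokens).
rng = np.random.default_rng(0)
routing = rng.integers(0, 8, size=(32, 2))   # (num_tokens, top_k) expert ids
core = {1, 5}                                # hypothetical core expert ids
print(core_expert_activation_rate(routing, core), should_retrieve(routing, core))
```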