Not All Layers of LLMs Are Necessary During Inference

Bibliographic Details
Published in: arXiv.org
Main Authors: Fan, Siqi; Jiang, Xin; Li, Xiang; Meng, Xuying; Han, Peng; Shang, Shuo; Sun, Aixin; Wang, Yequan; Wang, Zhongyuan
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 09.07.2024

Summary: Due to the large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. However, not all requests posed to LLMs are equally difficult to handle. Through analysis, we show that for some tasks, LLMs can achieve results comparable to the final output at some intermediate layers. That is, not all layers of LLMs are necessary during inference. If we can predict at which layer the inferred results match the final results (produced by evaluating all layers), we can significantly reduce the inference cost. To this end, we propose a simple yet effective algorithm named AdaInfer to adaptively terminate the inference process for an input instance. AdaInfer relies on easily obtainable statistical features and classic classifiers like SVM. Experiments on well-known LLMs such as the Llama2 series and OPT show that AdaInfer achieves an average pruning ratio of 17.8%, and up to 43% on sentiment tasks, with a negligible performance drop (<1%). Because AdaInfer does not alter LLM parameters, LLMs equipped with AdaInfer retain their generalizability across tasks.
ISSN: 2331-8422
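
As a rough illustration of the early-exit idea described in the summary, here is a minimal Python sketch. It is a sketch under stated assumptions, not the paper's implementation: the specific statistical features (top-1 probability and the top-1/top-2 gap of the token distribution decoded at each layer) and the names adaptive_forward, layers, lm_head, and stop_clf are all hypothetical.

import numpy as np
import torch

def adaptive_forward(layers, lm_head, hidden, stop_clf):
    # Run decoder layers one at a time; exit early once the classifier
    # predicts that this layer's output already matches the final output.
    for i, layer in enumerate(layers):
        hidden = layer(hidden)
        # Decode the current hidden state as if it were the last layer.
        probs = torch.softmax(lm_head(hidden[:, -1, :]), dim=-1)
        top2 = torch.topk(probs, k=2, dim=-1).values
        # Easily obtainable statistical features (assumed, batch size 1):
        # top-1 probability and the gap between top-1 and top-2.
        feats = np.array([[top2[0, 0].item(),
                           (top2[0, 0] - top2[0, 1]).item()]])
        if stop_clf.predict(feats)[0] == 1:
            return probs, i + 1  # early exit: layers i+1 onward are skipped
    return probs, len(layers)

Here stop_clf stands in for a classic classifier such as sklearn.svm.SVC, trained offline on feature/label pairs where the label records whether an intermediate layer's prediction already equals the full-depth prediction; since no model parameters are modified, the base LLM itself is untouched, consistent with the generalizability claim in the summary.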