Chimera: A Lossless Decoding Method for Accelerating Large Language Models Inference by Fusing all Tokens
Format: Journal Article
Language: English
Published: 24.02.2024
Summary:

Large language models (LLMs) have demonstrated remarkable capabilities across various tasks. However, their widespread application is hindered by the resource-intensive decoding process. To address this challenge, current approaches incorporate additional decoding heads that predict multiple subsequent tokens in parallel, thereby accelerating inference. Nevertheless, the accuracy of these decoding heads falls short of that of auto-regressive decoding.

In light of these limitations, we propose Chimera, a novel framework specifically designed for speculative sampling. Within this framework, we introduce a lightweight draft model that effectively utilizes previously generated tokens to predict subsequent words. To ensure both accuracy and efficiency, we present two strategies within the lightweight draft model. First, we capture short-range dependencies at the bottom layer. Second, we leverage the readily available representations from the original LLM. Through empirical evaluation on the Vicuna and LLaMA-2 series, Chimera achieves an average latency speedup ratio of 2.7x compared to the vanilla auto-regressive decoding approach. This highlights the potential of our proposed framework in significantly improving the efficiency of large language models during the decoding process.
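For context, the speculative-sampling scheme the abstract refers to works by letting a cheap draft model propose several tokens, which the original LLM then checks in a single parallel forward pass; only tokens the LLM would have produced itself are kept, which is what makes the method lossless. The sketch below is a minimal greedy-verification variant for illustration only, not the paper's implementation; `draft_next`, `target_greedy`, and the draft length `k` are assumed names and interfaces.

```python
from typing import Callable, List

def speculative_step(
    prefix: List[int],
    # cheap draft model: returns its next-token guess for a context
    draft_next: Callable[[List[int]], int],
    # target LLM (one batched pass): returns its greedy next-token choice
    # after prefix, after prefix + proposal[:1], ..., after the full
    # proposal -- i.e. k + 1 tokens for a k-token proposal
    target_greedy: Callable[[List[int]], List[int]],
    k: int = 4,
) -> List[int]:
    """One draft-then-verify step (greedy variant)."""
    # 1) draft k tokens auto-regressively with the cheap model
    proposal: List[int] = []
    ctx = list(prefix)
    for _ in range(k):
        tok = draft_next(ctx)
        proposal.append(tok)
        ctx.append(tok)

    # 2) verify all k proposals with ONE target forward pass
    verified = target_greedy(prefix + proposal)

    # 3) accept left-to-right while the draft matches the target;
    #    on the first mismatch, substitute the target's own token
    accepted: List[int] = []
    for i, tok in enumerate(proposal):
        if tok == verified[i]:
            accepted.append(tok)
        else:
            accepted.append(verified[i])  # correction token
            break
    else:
        accepted.append(verified[k])      # all matched: one bonus token
    return prefix + accepted
```

With this acceptance rule the output is token-for-token identical to plain greedy decoding; the speedup comes from verifying all k proposals in one target pass instead of k sequential ones. For stochastic decoding, standard speculative sampling replaces the equality check with a probabilistic acceptance test that preserves the target distribution, which is how losslessness is retained beyond the greedy case.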
---|---|
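The two draft-model strategies can likewise be pictured concretely: a small bottom layer mixes the embeddings of the last few generated tokens (short-range dependencies), and the result is fused with a representation the original LLM has already computed for the prefix, giving the draft head long-range context at no extra cost. The module below is a hedged illustration under those assumptions; the layer names, sizes, and fusion scheme are my own, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyDraftModel(nn.Module):
    """Illustrative draft head (NOT Chimera's actual architecture)."""

    def __init__(self, vocab_size: int, d_model: int, window: int = 3):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, d_model)
        # short-range path: bottom layer over the last `window` tokens
        self.local_mix = nn.Sequential(
            nn.Linear(window * d_model, d_model),
            nn.SiLU(),
        )
        # fuse the local summary with a reused LLM representation
        self.fuse = nn.Linear(2 * d_model, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, recent_tokens: torch.Tensor, llm_hidden: torch.Tensor):
        # recent_tokens: (batch, window) last generated token ids
        # llm_hidden:    (batch, d_model) hidden state already produced
        #                by the original LLM for the prefix (free to reuse)
        local = self.local_mix(self.embed(recent_tokens).flatten(1))
        fused = self.fuse(torch.cat([local, llm_hidden], dim=-1))
        return self.lm_head(fused)  # logits for the next draft token
```

The design intuition, as the abstract frames it, is that the bottom layer only needs to model cheap local patterns because the expensive long-range context is inherited from representations the LLM computes anyway during verification.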
DOI: 10.48550/arxiv.2402.15758