Rethinking Pruning Large Language Models: Benefits and Pitfalls of Reconstruction Error Minimization
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 21.06.2024 |
Summary: This work suggests fundamentally rethinking the current practice of pruning large language models (LLMs). The prevailing approach is divide and conquer: split the model into submodels, prune them sequentially, and reconstruct the predictions of their dense counterparts on small calibration data, one submodel at a time; the final model is obtained simply by putting the resulting sparse submodels together. While this approach enables pruning under memory constraints, it incurs high reconstruction errors. In this work, we first present an array of reconstruction techniques that can reduce this error by more than $90\%$. However, we discover that minimizing reconstruction error is not always ideal: it can overfit the given calibration data, increasing language perplexity and degrading performance on downstream tasks. We find that a strategy of self-generating calibration data can mitigate this trade-off between reconstruction and generalization, suggesting new directions in light of both the benefits and pitfalls of reconstruction for pruning LLMs.
DOI: 10.48550/arxiv.2406.15524
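The divide-and-conquer pipeline described in the summary can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it uses simple magnitude pruning on a single toy linear layer (the names `prune_by_magnitude` and `reconstruction_error` are made up for this sketch), and shows the per-submodel quantity that such pipelines minimize, namely the squared difference between dense and sparse outputs on calibration inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_by_magnitude(W, sparsity):
    """Zero out the smallest-magnitude weights. This is a simple
    stand-in for the pruning step; real LLM pruners are more
    sophisticated, but any of them produces a sparse W to compare."""
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k]
    mask = np.abs(W) >= thresh
    return W * mask

def reconstruction_error(W_dense, W_sparse, X):
    """Squared error between dense and sparse layer outputs on the
    calibration activations X -- the per-submodel objective that the
    pipeline in the summary minimizes."""
    diff = W_dense @ X - W_sparse @ X
    return float(np.linalg.norm(diff) ** 2)

# Toy "submodel": one linear layer and a small calibration batch.
W = rng.normal(size=(64, 128))          # dense weights
X = rng.normal(size=(128, 32))          # calibration activations

W_sparse = prune_by_magnitude(W, sparsity=0.5)
err = reconstruction_error(W, W_sparse, X)

# In a sequential pipeline, the sparse layer's outputs would then be
# fed forward as calibration inputs when pruning the next submodel,
# and the final model is the concatenation of all sparse submodels.
```

The summary's caution applies directly to `err`: driving it toward zero on a fixed calibration batch `X` can overfit that batch, which is why the paper reports that lower reconstruction error does not always translate into lower perplexity downstream.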