Art and Science of Quantizing Large-Scale Models: A Comprehensive Overview

Bibliographic Details
Main Authors: Wang, Yanshu; Yang, Tong; Liang, Xiyan; Wang, Guoan; Lu, Hanning; Zhe, Xu; Li, Yaoming; Weitao, Li
Format: Journal Article
Language: English
Published: 17.09.2024

Summary: This paper provides a comprehensive overview of the principles, challenges, and methodologies associated with quantizing large-scale neural network models. As neural networks have evolved towards larger and more complex architectures to address increasingly sophisticated tasks, the computational and energy costs have escalated significantly. We explore the necessity and impact of model size growth, highlighting the performance benefits as well as the computational challenges and environmental considerations. The core focus is on model quantization as a fundamental approach to mitigate these challenges by reducing model size and improving efficiency without substantially compromising accuracy. We delve into various quantization techniques, including both post-training quantization (PTQ) and quantization-aware training (QAT), and analyze several state-of-the-art algorithms such as LLM-QAT, PEQA (L4Q), ZeroQuant, SmoothQuant, and others. Through comparative analysis, we examine how these methods address issues like outliers, importance weighting, and activation quantization, ultimately contributing to more sustainable and accessible deployment of large-scale models.
DOI: 10.48550/arxiv.2409.11650
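
To give a concrete flavor of the post-training quantization (PTQ) idea the summary refers to, below is a minimal sketch of symmetric per-tensor int8 quantization. This is an illustrative assumption for orientation only; the function names and the simple symmetric scheme are not taken from the paper or from the surveyed algorithms (LLM-QAT, ZeroQuant, SmoothQuant, etc.), which use considerably more refined calibration and outlier handling.

```python
# Minimal sketch of symmetric per-tensor int8 post-training quantization.
# Illustrative assumption only, not the paper's method.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 codes with a single symmetric scale."""
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from the int8 codes."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 8)).astype(np.float32)  # stand-in weight matrix
    q, s = quantize_int8(w)
    w_hat = dequantize_int8(q, s)
    # Each weight now occupies 1 byte instead of 4, at the cost of a small
    # reconstruction error.
    print("max abs error:", np.max(np.abs(w - w_hat)))
```

The surveyed methods differ mainly in how they choose scales (per-channel or per-group rather than per-tensor), how they treat activation outliers, and whether quantization effects are simulated during training (QAT) or applied afterwards (PTQ).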