Rethinking Learned Image Compression: Context is All You Need
Format | Journal Article |
Language | English |
Published | 16.07.2024 |
---|---|
Summary: | Since learned image compression (LIC) has recently made rapid progress compared to traditional methods, this paper discusses the question 'Where is the boundary of Learned Image Compression (LIC)?'. It splits this problem into two sub-problems: 1) Where is the boundary of rate-distortion performance measured in PSNR? 2) How can the compression gain be further improved to reach that boundary? To this end, the paper analyzes the effectiveness of scaling the parameters of the encoder, the decoder, and the context model, the three components of LIC, and concludes that scaling LIC amounts to scaling its context model and decoder. Extensive experiments demonstrate that overfitting can actually serve as an effective context. By optimizing the context, the paper further improves PSNR and achieves state-of-the-art performance, with a BD-rate gain of 14.39% over VVC. |
---|---|
DOI: | 10.48550/arxiv.2407.11590 |
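The abstract reports its headline result as a BD-rate (Bjøntegaard delta rate) gain over VVC. As background, a minimal sketch of how such a number is computed from two rate-distortion curves: the standard metric fits a cubic polynomial to log-rate vs. PSNR, whereas this simplified illustration (function name and piecewise-linear interpolation are my own choices, not from the paper) averages the log-rate gap over the overlapping PSNR range.

```python
import math

def bd_rate(anchor, test, n=100):
    """Approximate BD-rate: average percent bitrate change of `test`
    relative to `anchor` at equal quality. Each input is a list of
    (bitrate, psnr) points sorted by ascending PSNR. Uses piecewise-
    linear interpolation of log10(rate) over PSNR, a simplification
    of the standard cubic-polynomial fit."""
    def interp_log_rate(points, q):
        # Linearly interpolate log10(bitrate) at quality level q.
        for (r0, q0), (r1, q1) in zip(points, points[1:]):
            if q0 <= q <= q1:
                t = (q - q0) / (q1 - q0)
                return (1 - t) * math.log10(r0) + t * math.log10(r1)
        raise ValueError("quality outside the curve's range")

    # Integrate only over the PSNR interval covered by both curves.
    lo = max(anchor[0][1], test[0][1])
    hi = min(anchor[-1][1], test[-1][1])
    qs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    diffs = [interp_log_rate(test, q) - interp_log_rate(anchor, q) for q in qs]
    avg = sum(diffs) / n
    # Negative result = bitrate savings of `test` over `anchor`.
    return (10 ** avg - 1) * 100
```

For example, a codec that matches the anchor's PSNR at half the bitrate everywhere yields a BD-rate of -50%; a "14.39% gain over VVC" corresponds to a BD-rate of about -14.39% with VVC as the anchor.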