Low Complexity Learning-Based QTMTT Partitioning Scheme for Inter Coding in VVC Encoder
Published in | IEEE Access, Vol. 12, pp. 141088-141103
---|---
Main Authors | , , , ,
Format | Journal Article
Language | English
Published | IEEE, 2024
Summary | The Versatile Video Coding (VVC) standard, finalized in 2020 by the Joint Video Experts Team (JVET), a partnership of ITU-T VCEG and ISO/IEC MPEG, marks a major advancement in video compression technology, offering a 50% efficiency improvement over its predecessor, the High Efficiency Video Coding (HEVC) standard. A key innovation in VVC is the Quad Tree with nested Multi-Type Tree (QTMTT) structure used in the partitioning process. However, this enhancement increases coding complexity, posing challenges for real-time applications. To address this, our paper optimizes the partitioning process in the VVC encoder under the Random Access (RA) configuration. We propose a novel approach that leverages inter-prediction by integrating both coding and motion information across inter-frames to enhance coding efficiency. The solution is implemented on the Fraunhofer Versatile Video Encoder (VVenC) and uses a set of lightweight Light Gradient Boosting Machine (LightGBM) binary classifiers to predict the optimal split mode for each Coding Unit (CU), significantly accelerating the VVenC encoding process. Experimental results show that our method reduces the runtime of the slower preset by 43.21%, with only a slight bitrate increase of 2.9%. These improvements substantially reduce computational complexity and outperform several existing state-of-the-art methods.
---|---
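The summary describes a set of binary classifiers that jointly select one of VVC's QTMTT split modes (no split, quadtree, horizontal/vertical binary, horizontal/vertical ternary) per CU. One common way such classifiers are combined in the literature is a decision cascade: each stage answers a single yes/no question and narrows the candidate modes. The sketch below illustrates that cascade structure only; the stage order, the feature names (`p_split`, `p_qt`, etc.), and the fixed 0.5 thresholds are illustrative assumptions standing in for trained LightGBM models on coding/motion features, not the paper's actual design.

```python
# Illustrative cascade of binary decisions for QTMTT split-mode selection.
# Each classify() call stands in for a trained LightGBM binary classifier;
# here it simply thresholds a precomputed score (hypothetical features).

SPLIT_MODES = ["NO_SPLIT", "QT", "BT_H", "BT_V", "TT_H", "TT_V"]

def classify(score: float, threshold: float = 0.5) -> bool:
    """Stand-in for a binary classifier's thresholded probability output."""
    return score >= threshold

def predict_split_mode(scores: dict) -> str:
    # Stage 1: split vs. no split -- prunes the whole subtree if negative.
    if not classify(scores["p_split"]):
        return "NO_SPLIT"
    # Stage 2: quadtree vs. multi-type tree.
    if classify(scores["p_qt"]):
        return "QT"
    # Stage 3: horizontal vs. vertical partition direction.
    horizontal = classify(scores["p_horizontal"])
    # Stage 4: binary vs. ternary split.
    if classify(scores["p_bt"]):
        return "BT_H" if horizontal else "BT_V"
    return "TT_H" if horizontal else "TT_V"

# Example: a CU whose (hypothetical) scores favor a horizontal binary split.
cu_scores = {"p_split": 0.9, "p_qt": 0.2, "p_horizontal": 0.8, "p_bt": 0.7}
print(predict_split_mode(cu_scores))  # BT_H
```

The speed-up comes from the cascade pruning the rate-distortion search: whenever a stage rules out a branch (e.g. "no split"), the encoder skips evaluating every mode below it instead of exhaustively testing all six.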
ISSN | 2169-3536
DOI | 10.1109/ACCESS.2024.3469089