Machine Learning based Efficient QT-MTT Partitioning Scheme for VVC Intra Encoders


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, no. 8
Main Authors: Tissier, Alexandre; Hamidouche, Wassim; Mdalsi, Souhaiel Belhadj Dit; Vanne, Jarno; Galpin, Franck; Menard, Daniel
Format: Journal Article
Language: English
Published: New York: IEEE, 01.08.2023
Publisher: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)

Summary: The next-generation Versatile Video Coding (VVC) standard introduces a new Multi-Type Tree (MTT) block partitioning structure that supports Binary-Tree (BT) and Ternary-Tree (TT) splits in both vertical and horizontal directions. This new approach leads to five possible splits at each block depth and thereby improves the coding efficiency of VVC over that of the preceding High Efficiency Video Coding (HEVC) standard, which supports only Quad-Tree (QT) partitioning with a single split per block depth. However, MTT has also brought a considerable increase in encoder computational complexity. This paper proposes a two-stage learning-based technique to tackle the complexity overhead of MTT in VVC intra encoders. In our scheme, the input block is first processed by a Convolutional Neural Network (CNN) that predicts its spatial features as a vector of probabilities describing the partition at each 4×4 edge. A Decision Tree (DT) model then leverages this vector of spatial features to predict the most likely splits for each block. Finally, based on this prediction, only the N most likely splits are processed by the Rate-Distortion (RD) search of the encoder. To train our CNN and DT models on a wide range of image contents, we also propose a public VVC frame partitioning dataset based on an existing image dataset encoded with the VVC reference software encoder. Our solution with the top-3 configuration reaches a 47.4% complexity reduction for a negligible bitrate increase of 0.79%. A top-2 configuration enables a higher complexity reduction of 70.4% for a 2.49% bitrate loss. These results demonstrate a better trade-off between VTM intra-coding efficiency and complexity reduction than state-of-the-art solutions. The source code of the proposed method and the training dataset are publicly available on GitHub.
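The core idea of the abstract's second stage can be illustrated with a minimal sketch: a model scores the candidate split modes of a block, and only the N most likely ones are forwarded to the full RD search. The mode names, function, and probability values below are hypothetical illustrations, not the paper's actual interface; VVC intra coding offers up to six decisions per block (no split, QT, and the four BT/TT splits).

```python
import numpy as np

# Hypothetical split-mode labels for a VVC intra block: no split, quad-tree,
# and binary/ternary splits in horizontal and vertical directions.
SPLIT_MODES = ["no_split", "QT", "BT_H", "BT_V", "TT_H", "TT_V"]

def select_top_n_splits(split_probs, n):
    """Return the n split modes with the highest predicted probability.

    split_probs: array of one probability per entry of SPLIT_MODES,
    as a decision-tree model might emit them.
    """
    order = np.argsort(split_probs)[::-1]  # indices sorted most likely first
    return [SPLIT_MODES[i] for i in order[:n]]

# Illustrative probabilities (made up): only the top-3 modes would then be
# evaluated by the encoder's expensive rate-distortion search.
probs = np.array([0.05, 0.40, 0.25, 0.15, 0.10, 0.05])
print(select_top_n_splits(probs, 3))  # → ['QT', 'BT_H', 'BT_V']
```

Skipping the RD evaluation of the remaining modes is where the reported 47.4% (top-3) and 70.4% (top-2) complexity reductions come from, traded against the small bitrate loss caused by occasionally pruning the true best split.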
ISSN: 1051-8215
1558-2205
DOI: 10.1109/TCSVT.2022.3232385