Quad-tree based inter-view motion prediction

Bibliographic Details
Published in: 2015 Visual Communications and Image Processing (VCIP), pp. 1 - 4
Main Authors: Ji Ma, Na Zhang, Xiaopeng Fan, Ruiqin Xiong, Debin Zhao
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2015

Summary: As a 3D video extension of the Audio Video Coding Standard (AVS), 3D-AVS is being developed to improve the coding efficiency of multi-view video. Since multi-view video consists of projections of the same scene from different viewpoints at the same time instant, it contains a large amount of inter-view redundancy. To exploit the inter-view correlation, this paper presents a method to derive the motion parameters for a coding unit (CU) in the dependent view from the already coded inter-view picture. The algorithm is based on quad-tree partitioning: each CU can be recursively split into four sub-CUs of equal size, and whether each sub-CU is split further is determined by comparing the derived motion parameters. Experimental results show that the proposed method provides an 8.7% BD-rate saving on both video 1 and video 2 in the low-delay configuration, and the BD-rate saving reaches up to 14.5% and 13.7% on video 1 and video 2, respectively, for the Balloons sequence. This method has been proposed and adopted into the 3D-AVS standard.
DOI: 10.1109/VCIP.2015.7457861
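
The summary describes the derivation only at a high level. The following Python sketch illustrates the quad-tree decision it outlines: a CU in the dependent view is tentatively split into four equal sub-CUs, motion parameters are derived for each from the already coded inter-view picture, and the split is kept only when the derived parameters differ. The names (MotionParams, derive_motion, motion_at), the disparity-based centre-sample lookup, the equality-based comparison rule, and the minimum CU size are illustrative assumptions, not the 3D-AVS specification.

from dataclasses import dataclass

@dataclass(frozen=True)
class MotionParams:
    ref_idx: int   # reference picture index
    mv_x: int      # motion vector, horizontal component
    mv_y: int      # motion vector, vertical component

def derive_motion(x, y, w, h, interview_pic, disparity):
    # Fetch the motion parameters of the inter-view block that corresponds
    # to (x, y, w, h) shifted by the disparity. The centre-sample lookup
    # and the motion_at() accessor are assumptions for illustration.
    cx = x + disparity + w // 2
    cy = y + h // 2
    return interview_pic.motion_at(cx, cy)

def derive_cu_motion(x, y, size, interview_pic, disparity, min_cu_size=8):
    # Recursively build the quad-tree of derived motion parameters for a CU.
    # Each CU is tentatively split into four equal sub-CUs; if all four
    # derived parameter sets are identical (comparison rule assumed here),
    # the split is pruned and the CU keeps a single parameter set.
    half = size // 2
    if half < min_cu_size:
        return derive_motion(x, y, size, size, interview_pic, disparity)

    subs = [
        derive_cu_motion(x,        y,        half, interview_pic, disparity, min_cu_size),
        derive_cu_motion(x + half, y,        half, interview_pic, disparity, min_cu_size),
        derive_cu_motion(x,        y + half, half, interview_pic, disparity, min_cu_size),
        derive_cu_motion(x + half, y + half, half, interview_pic, disparity, min_cu_size),
    ]
    # Prune the split when the four derived parameter sets agree.
    if all(isinstance(s, MotionParams) for s in subs) and len(set(subs)) == 1:
        return subs[0]
    return subs

The return value is either a single MotionParams (no further split) or a nested list of four sub-results, mirroring the quad-tree structure of the CU.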