Layer-based sparse representation of multiview images

Bibliographic Details
Published in: EURASIP Journal on Advances in Signal Processing, Vol. 2012, No. 1, pp. 1-15
Main Authors: Gelman, Andriy; Berent, Jesse; Dragotti, Pier Luigi
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 09.03.2012 (also listed under Springer Nature B.V.; BioMed Central Ltd)

Summary: This article presents a novel method to obtain a sparse representation of multiview images. The method is based on the fact that multiview data is composed of epipolar-plane image lines which are highly redundant. We extend this principle to obtain the layer-based representation, which partitions a multiview image dataset into redundant regions (which we call layers), each related to a constant depth in the observed scene. The layers are extracted using a general segmentation framework which takes into account the camera setup and occlusion constraints. To obtain a sparse representation, the extracted layers are further decomposed using a multidimensional discrete wavelet transform (DWT), first across the view domain and then with a two-dimensional (2D) DWT applied to the image dimensions. We modify the viewpoint DWT to take into account occlusions and scene depth variations. Simulation results based on nonlinear approximation show that the sparsity of our representation is superior to that of the multidimensional DWT without disparity compensation. In addition, we demonstrate that the constant-depth model of the representation can be used to synthesise novel viewpoints for immersive viewing applications and also to de-noise multiview images.
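
As a rough illustration of the separable decomposition described in the summary, and not the authors' implementation, the following Python sketch applies a 1D DWT across the viewpoint axis of a single extracted layer and then a 2D DWT to the image dimensions of each resulting subband, using PyWavelets. The occlusion handling and the paper's modified, depth-aware viewpoint DWT are omitted; the layer array, wavelet choice, and decomposition levels are illustrative assumptions.

```python
# Minimal sketch (not from the paper) of the separable multiview DWT
# outlined in the abstract: a 1-D DWT across the viewpoint axis of one
# constant-depth layer, followed by a 2-D DWT on the image dimensions of
# every viewpoint subband. Occlusions and disparity compensation are ignored.
import numpy as np
import pywt

def layer_dwt(layer_views, wavelet="haar", view_levels=1, spatial_levels=2):
    """layer_views: array of shape (n_views, height, width) holding the
    pixels of one extracted layer as seen from every camera view."""
    # 1-D DWT along the viewpoint dimension (axis 0).
    view_bands = pywt.wavedec(layer_views, wavelet, level=view_levels, axis=0)
    # 2-D DWT on the spatial dimensions of each viewpoint subband.
    return [pywt.wavedec2(band, wavelet, level=spatial_levels, axes=(-2, -1))
            for band in view_bands]

# Toy usage: 4 views of a 64x64 layer region.
coeffs = layer_dwt(np.random.rand(4, 64, 64).astype(np.float32))
```

In the paper itself, the viewpoint transform is additionally adapted to the constant-depth (disparity) structure of each layer and to occlusions, which is what yields the sparsity advantage over a plain multidimensional DWT.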
Bibliography: ObjectType-Article-2; SourceType-Scholarly Journals-1; ObjectType-Feature-1
ISSN: 1687-6180; 1687-6172
DOI: 10.1186/1687-6180-2012-61