Decomposable Nonlocal Tensor Dictionary Learning for Multispectral Image Denoising


Bibliographic Details
Published in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2949 - 2956
Main Authors: Peng, Yi; Meng, Deyu; Xu, Zongben; Gao, Chenqiang; Yang, Yi; Zhang, Biao
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2014
Summary: Compared to conventional RGB or gray-scale images, multispectral images (MSI) can deliver a more faithful representation of real scenes and enhance the performance of many computer vision tasks. In practice, however, an MSI is always corrupted by various noises. In this paper we propose an effective MSI denoising approach that jointly considers two intrinsic characteristics underlying an MSI: the nonlocal similarity over space and the global correlation across spectrum. Specifically, by explicitly exploiting the spatial self-similarity of an MSI, we construct a nonlocal tensor dictionary learning model with a group-block-sparsity constraint, which makes similar full-band patches (FBP) share the same atoms from the spatial and spectral dictionaries. Furthermore, by exploiting the spectral correlation of an MSI and assuming over-redundancy of the dictionaries, the constrained nonlocal MSI dictionary learning model can be decomposed into a series of unconstrained low-rank tensor approximation problems, which can be readily solved by off-the-shelf higher order statistics. Experimental results show that our method outperforms all state-of-the-art MSI denoising methods under comprehensive quantitative performance measures.
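
The summary describes a pipeline built from two ingredients: nonlocal grouping of full-band patches and low-rank tensor approximation of each group. The Python sketch below illustrates only those two ingredients under assumptions of my own; it is not the authors' group-block-sparse dictionary model or their decomposition scheme. It groups similar full-band patches by Euclidean distance and denoises each group with a truncated higher-order SVD (a low-rank Tucker approximation). All function names, patch sizes, group sizes, and ranks are illustrative choices.

# Illustrative sketch only (assumed parameters, not the paper's algorithm):
# 1) extract full-band patches (FBPs), 2) group nonlocally similar FBPs,
# 3) denoise each group via truncated HOSVD (low-rank Tucker approximation),
# 4) aggregate the denoised patches back into the image.
import numpy as np


def extract_fbps(msi, patch=8, stride=4):
    """Extract full-band patches of shape (patch, patch, bands)."""
    H, W, _ = msi.shape
    coords = [(i, j) for i in range(0, H - patch + 1, stride)
                     for j in range(0, W - patch + 1, stride)]
    fbps = np.stack([msi[i:i + patch, j:j + patch, :] for i, j in coords])
    return fbps, coords


def group_similar(fbps, ref, group_size=30):
    """Indices of the FBPs closest (L2 distance) to the reference FBP."""
    dists = ((fbps - fbps[ref]) ** 2).sum(axis=(1, 2, 3))
    return np.argsort(dists)[:group_size]


def mode_product(tensor, matrix, mode):
    """Mode-n product: multiply `matrix` along axis `mode` of `tensor`."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)


def truncated_hosvd(tensor, ranks):
    """Low-rank approximation via truncated higher-order SVD."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):          # project onto factor subspaces
        core = mode_product(core, U.T, mode)
    approx = core
    for mode, U in enumerate(factors):          # reconstruct at full size
        approx = mode_product(approx, U, mode)
    return approx


def denoise_msi(msi, patch=8, stride=4, group_size=30, ranks=(20, 30, 4)):
    """Average the low-rank approximations of all nonlocal FBP groups."""
    H, W, B = msi.shape
    fbps, coords = extract_fbps(msi, patch, stride)
    out = np.zeros_like(msi, dtype=float)
    weight = np.zeros((H, W, 1))
    for ref in range(len(fbps)):
        idx = group_similar(fbps, ref, group_size)
        # arrange the group as a 3rd-order tensor: group x space x spectrum
        group = fbps[idx].reshape(len(idx), patch * patch, B)
        r = (min(ranks[0], len(idx)), min(ranks[1], patch * patch), min(ranks[2], B))
        approx = truncated_hosvd(group, r)
        for k, g in zip(idx, approx):
            i, j = coords[k]
            out[i:i + patch, j:j + patch, :] += g.reshape(patch, patch, B)
            weight[i:i + patch, j:j + patch, :] += 1.0
    return out / np.maximum(weight, 1e-8)


if __name__ == "__main__":
    # tiny synthetic example: smooth 32x32x8 cube plus Gaussian noise
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 32)[:, None, None], (1, 32, 8))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    print("noisy MAE:   ", np.abs(noisy - clean).mean())
    print("denoised MAE:", np.abs(denoise_msi(noisy) - clean).mean())

On a small synthetic cube this runs in seconds; a real MSI would need a faster patch search and a principled choice of ranks, and the paper's actual method learns spatial and spectral dictionaries under a group-block-sparsity constraint rather than applying a fixed HOSVD truncation.
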
ISSN: 1063-6919, 2575-7075
DOI: 10.1109/CVPR.2014.377