Video Denoising via Empirical Bayesian Estimation of Space-Time Patches
| Published in | Journal of Mathematical Imaging and Vision, Vol. 60, No. 1, pp. 70–93 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 01.01.2018 (Springer Nature B.V.; Springer Verlag) |
Summary: In this paper we present a new patch-based empirical Bayesian video denoising algorithm. The method builds a Bayesian model for each group of similar space-time patches. These patches are not motion-compensated, which avoids the risk of inaccuracies caused by motion estimation errors. The high dimensionality of spatiotemporal patches, together with the limited number of available samples, poses a challenge when estimating the statistics needed for an empirical Bayesian method. We therefore assume that groups of similar patches have a low intrinsic dimensionality, leading to a *spiked covariance model*. Based on theoretical results about the estimation of spiked covariance matrices, we propose estimators of the eigenvalues of the a priori covariance in high-dimensional spaces as simple corrections of the eigenvalues of the sample covariance matrix. We demonstrate empirically that these estimators lead to better empirical Wiener filters. A comparison on classic benchmark videos demonstrates improved visual quality and an increased PSNR with respect to state-of-the-art video denoising methods.
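The summary sketches a concrete pipeline: group similar space-time patches, estimate the group covariance, correct its eigenvalues under a spiked covariance assumption, and apply an empirical Wiener filter in the resulting eigenbasis. The Python sketch below illustrates that idea under simplifying assumptions. The function name `denoise_patch_group`, the debiasing formula (the standard spiked-model eigenvalue inversion), and the thresholding rule are illustrative choices and are not taken from the paper, which defines its own estimators.

```python
import numpy as np

def denoise_patch_group(noisy_patches, sigma, rank=None):
    """
    Denoise a group of similar, flattened space-time patches with an
    empirical Wiener filter built on a spiked covariance model.

    noisy_patches : (n, d) array of n similar patches of dimension d
    sigma         : noise standard deviation
    rank          : assumed intrinsic dimensionality (number of "spikes");
                    if None, keep only eigenvalues above the detection threshold
    """
    n, d = noisy_patches.shape
    gamma = d / n                            # aspect ratio of the sample

    mean = noisy_patches.mean(axis=0)
    centered = noisy_patches - mean

    # Sample covariance of the noisy group and its eigen-decomposition.
    cov = centered.T @ centered / n
    evals, evecs = np.linalg.eigh(cov)       # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]

    # Spiked-model correction: sample eigenvalues above the detection
    # threshold sigma^2 (1 + sqrt(gamma))^2 are debiased by inverting
    # ell = lam + gamma * sigma^2 * lam / (lam - sigma^2); the rest are
    # treated as pure noise and discarded.
    threshold = sigma**2 * (1.0 + np.sqrt(gamma))**2
    corrected = np.zeros_like(evals)
    for i, ell in enumerate(evals):
        if ell > threshold:
            b = ell + sigma**2 * (1.0 - gamma)
            lam = 0.5 * (b + np.sqrt(max(b**2 - 4.0 * ell * sigma**2, 0.0)))
            corrected[i] = max(lam - sigma**2, 0.0)   # signal part only

    if rank is not None:
        corrected[rank:] = 0.0

    # Empirical Wiener filter in the eigenbasis of the group: each
    # coefficient is shrunk by lam_signal / (lam_signal + sigma^2).
    shrink = corrected / (corrected + sigma**2 + 1e-12)
    coeffs = centered @ evecs                # (n, d) eigen-coefficients
    return mean + (coeffs * shrink) @ evecs.T
```

Setting the eigenvalues below the detection threshold to zero is one simple way to encode the low intrinsic dimensionality assumption; passing an explicit `rank` instead mimics fixing the number of retained directions in advance.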
ISSN: 0924-9907; 1573-7683
DOI: 10.1007/s10851-017-0742-4