Deep robust image deblurring via blur distilling and information comparison in latent space


Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 466, pp. 69-79
Main Authors: Niu, Wenjia; Zhang, Kaihao; Luo, Wenhan; Zhong, Yiran; Li, Hongdong
Format: Journal Article
Language: English
Published: Elsevier B.V., 27.11.2021

Summary: Current deep deblurring methods focus mainly on learning a network that transfers synthetic blurred images to clean ones. Although they achieve strong performance on their training datasets, they generalize poorly to other datasets with different synthetic blurs, resulting in significantly inferior performance at test time. To alleviate this problem, we propose a latent contrastive model, the Blur Distilling and Information Reconstruction Network (BDIRNet), to learn an image prior and improve the robustness of deep deblurring. BDIRNet consists of a blur-removing network (DistillNet) and a reconstruction network (RecNet). Two images carrying almost the same information but of different qualities are fed into DistillNet, which extracts their shared structural information by contrasting latent representations and filters out perturbations from unimportant information such as blur, while RecNet reconstructs sharp images from the extracted information. In addition, a statistical anti-interference distilling (SAID) module inside DistillNet and a statistical anti-interference reconstruction (SAIR) module inside RecNet further enhance the robustness of the method. Extensive experiments on different datasets show that the proposed method achieves improved and more robust results compared to recent state-of-the-art methods.
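The abstract implies two training signals: a latent-consistency ("contrast") term that pulls the DistillNet latents of a blurred/sharp pair together, and a reconstruction term that asks RecNet to recover the sharp image from the latent distilled out of the blurred input. A minimal toy sketch of those two losses, using hypothetical linear stand-ins for the paper's actual CNN modules (the SAID/SAIR components are not reproduced here):

```python
import numpy as np

def distill_net(x, W):
    """Toy stand-in for DistillNet: a linear encoder with a tanh
    nonlinearity mapping an image vector to a latent code."""
    return np.tanh(W @ x)

def rec_net(z, V):
    """Toy stand-in for RecNet: a linear decoder reconstructing a
    sharp image vector from the latent code."""
    return V @ z

def bdirnet_losses(blurred, sharp, W, V):
    """Return (latent-consistency loss, reconstruction loss) for one
    blurred/sharp pair, as sketched from the abstract."""
    z_b = distill_net(blurred, W)  # latent from the blurred input
    z_s = distill_net(sharp, W)    # latent from the paired sharp input
    # Both inputs carry (almost) the same structure, so their latents
    # should agree; blur-specific information is penalized away.
    l_latent = np.mean((z_b - z_s) ** 2)
    # RecNet should recover the sharp image from the latent distilled
    # out of the blurred input.
    l_rec = np.mean((rec_net(z_b, V) - sharp) ** 2)
    return l_latent, l_rec
```

Note that when the two inputs are identical the latent-consistency term vanishes, which is the degenerate case the contrastive setup is built around: only the information that differs between the pair (i.e., the blur) is penalized out of the latent.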
ISSN: 0925-2312
eISSN: 1872-8286
DOI: 10.1016/j.neucom.2021.09.019