Meta Transferring for Deblurring


Bibliographic Details
Published in: arXiv.org
Main Authors: Po-Sheng Liu, Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Chia-Wen Lin, Yen-Yu Lin
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.10.2022

Summary: Most previous deblurring methods were built with a generic model trained on blurred images and their sharp counterparts. However, these approaches may yield sub-optimal deblurring results due to the domain gap between the training and test sets. This paper proposes a reblur-deblur meta-transferring scheme to realize test-time adaptation without using ground truth for dynamic scene deblurring. Since ground truth is usually unavailable at inference time in real-world scenarios, we leverage the blurred input video to find and use relatively sharp patches as the pseudo ground truth. Furthermore, we propose a reblurring model to extract the homogeneous blur from the blurred input and transfer it to the pseudo-sharp patches, obtaining the corresponding pseudo-blurred patches for meta-learning and test-time adaptation with only a few gradient updates. Extensive experimental results show that our reblur-deblur meta-learning scheme can improve state-of-the-art deblurring models on the DVD, REDS, and RealBlur benchmark datasets.
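The pseudo-ground-truth step in the summary rests on scoring patches of the blurred video by sharpness and keeping the sharpest ones. A common proxy for sharpness is the variance of the Laplacian response; the sketch below uses that metric, but the function names, patch size, and the specific sharpness criterion are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def laplacian_variance(patch):
    """Sharpness proxy: variance of a finite-difference Laplacian.
    Blurry patches suppress high frequencies, so they score low."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def select_pseudo_sharp_patches(frames, patch_size=64, top_k=4):
    """Scan non-overlapping patches across all grayscale frames and
    return the top-k sharpest as pseudo ground truth."""
    scored = []
    for frame in frames:
        h, w = frame.shape
        for y in range(0, h - patch_size + 1, patch_size):
            for x in range(0, w - patch_size + 1, patch_size):
                p = frame[y:y + patch_size, x:x + patch_size]
                scored.append((laplacian_variance(p), p))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:top_k]]
```

In the full scheme these selected patches would then be passed through the reblurring model to synthesize pseudo-blurred counterparts, giving (pseudo-blurred, pseudo-sharp) pairs for the few-step test-time update.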
ISSN:2331-8422