HDR light field imaging of dynamic scenes: A learning-based method and a benchmark dataset

Bibliographic Details
Published in: Pattern Recognition, Vol. 150, p. 110313
Main Authors: Chen, Yeyao; Jiang, Gangyi; Yu, Mei; Jin, Chongchong; Xu, Haiyong; Ho, Yo-Sung
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2024

Summary:

Highlights:
• A novel learning-based method is proposed for ghost-free high dynamic range (HDR) light field imaging.
• A multi-scale architecture integrating a deformable alignment module and an angular embedding module is designed.
• A new large-scale benchmark dataset is established to serve the HDR light field imaging task for dynamic scenes.
• The proposed method achieves superior spatial quality and preserves accurate angular consistency.

Abstract: Light field (LF) imaging is an effective way to enable immersive applications. However, limited by the potential well capacity of the image sensor, acquired LF images suffer from low dynamic range and are thus prone to under-exposure or over-exposure. High dynamic range (HDR) LF imaging is an effective way to improve the dynamic range of LF imaging. Unfortunately, for dynamic scenes, existing methods tend to produce ghosting artifacts and lose details in saturated regions, while potentially damaging the parallax structure of the generated HDR LF images. To address these challenges, this paper proposes a new ghost-free HDR LF imaging method based on a deformable aggregation and angular embedding network. Specifically, considering the four-dimensional geometric structure of the LF image, a deformable alignment module is first designed to handle dynamic regions in the spatial domain, and the aligned spatial features are then fully fused through an aggregation operation. Subsequently, an angular embedding module is constructed to exploit angular information and enhance the aggregated spatial features. These two modules are cascaded in a multi-scale manner to achieve multi-level feature extraction and strengthen the feature representation. Finally, a decoder recovers the ghost-free HDR LF image from the enhanced multi-scale features. For performance evaluation, this paper establishes a large-scale benchmark dataset with multi-exposure inputs and ground-truth images.
Extensive experimental results show that the proposed method generates visually pleasing HDR LF images while preserving accurate angular consistency. Moreover, the proposed method surpasses the state-of-the-art methods in both quantitative and qualitative comparisons. The code and dataset will be available at https://github.com/YeyaoChen/HDRLFI.
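The abstract describes a pipeline of cascaded stages: deformable alignment of multi-exposure views against a reference, aggregation of the aligned spatial features, angular embedding, multi-scale repetition, and a final decoder. The following minimal sketch illustrates only that data flow; every function here (`deformable_align`, `aggregate`, `angular_embed`, `downsample`) is a hypothetical placeholder for the corresponding learned module, not the authors' implementation.

```python
import numpy as np

def deformable_align(ref, src):
    # Placeholder: the real module is a learned deformable convolution
    # that warps src toward ref to suppress motion (ghosting).
    return src

def aggregate(features):
    # Placeholder fusion of aligned exposures (mean instead of a learned merge).
    return np.mean(features, axis=0)

def angular_embed(feat):
    # Placeholder: the real module exploits inter-view (angular) correlations.
    return feat

def downsample(lf):
    # Halve spatial resolution of each sub-aperture view.
    return lf[:, ::2, ::2, :]

def hdr_lf_pipeline(exposures, num_scales=3):
    # exposures: list of LF arrays shaped (views, H, W, C), one per exposure.
    ref = exposures[1]  # middle exposure as reference (a common convention)
    multi_scale_feats = []
    cur, cur_ref = exposures, ref
    for _ in range(num_scales):
        aligned = [deformable_align(cur_ref, e) for e in cur]
        fused = aggregate(np.stack(aligned))          # aggregation step
        multi_scale_feats.append(angular_embed(fused))  # angular embedding step
        cur = [downsample(e) for e in cur]            # next (coarser) scale
        cur_ref = downsample(cur_ref)
    # A decoder would then merge multi_scale_feats into one HDR LF image.
    return multi_scale_feats

views, H, W, C = 25, 64, 64, 3  # e.g. a 5x5 angular grid of 64x64 RGB views
exposures = [np.random.rand(views, H, W, C).astype(np.float32) for _ in range(3)]
feats = hdr_lf_pipeline(exposures)
print([f.shape for f in feats])
# → [(25, 64, 64, 3), (25, 32, 32, 3), (25, 16, 16, 3)]
```

The printed shapes form the coarse-to-fine feature pyramid that the paper's multi-scale cascade implies; in the actual network each stage is trained end to end rather than being the identity/mean stand-ins used here.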
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2024.110313