Light Field Synthesis by Training Deep Network in the Refocused Image Domain

Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 29, pp. 6630-6640
Main Authors: Liu, Chang-Le; Shih, Kuang-Tsu; Huang, Jiun-Woei; Chen, Homer H.
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020
Summary: Light field imaging, which captures spatial-angular information of light incident on image sensors, enables many interesting applications such as image refocusing and augmented reality. However, due to the limited sensor resolution, a trade-off exists between the spatial and angular resolutions. To increase the angular resolution, view synthesis techniques have been adopted to generate new views from existing views. However, traditional learning-based view synthesis mainly considers the image quality of each view of the light field and neglects the quality of the refocused images. In this paper, we propose a new loss function called refocused image error (RIE) to address the issue. The main idea is that the image quality of the synthesized light field should be optimized in the refocused image domain because it is where the light field is viewed. We analyze the behavior of RIE in the spectral domain and test the performance of our approach against previous approaches on both real (INRIA) and software-rendered (HCI) light field datasets using objective assessment metrics such as MSE, MAE, PSNR, SSIM, and GMSD. Experimental results show that the light field generated by our method produces better refocused images than those generated by previous methods.
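For readers unfamiliar with the refocused image domain, the sketch below illustrates the general idea behind an RIE-style objective: refocused images are formed from a 4D light field by the standard shift-and-sum operation, and the error is measured between refocused renderings of the synthesized and ground-truth light fields rather than between individual views. This is a minimal illustration only; the function names, the integer pixel shifts, and the set of refocusing parameters are assumptions made for this sketch and do not reproduce the authors' implementation.

```python
import numpy as np


def refocus(light_field, alpha):
    """Shift-and-sum refocusing of a 4D light field.

    light_field: array of shape (U, V, H, W) with angular (U, V) and
    spatial (H, W) dimensions; single channel for simplicity.
    alpha: hypothetical refocusing parameter controlling the per-view shift.
    """
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    refocused = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its angular offset, then average.
            du = int(round(alpha * (u - uc)))
            dv = int(round(alpha * (v - vc)))
            refocused += np.roll(light_field[u, v], shift=(du, dv), axis=(0, 1))
    return refocused / (U * V)


def refocused_image_error(lf_synth, lf_gt, alphas=(-1.0, 0.0, 1.0)):
    """Mean squared error between refocused images of the synthesized and
    ground-truth light fields, averaged over several focal settings."""
    errors = [np.mean((refocus(lf_synth, a) - refocus(lf_gt, a)) ** 2)
              for a in alphas]
    return float(np.mean(errors))


# Usage: compare a perturbed copy of a random light field against the original.
rng = np.random.default_rng(0)
lf_gt = rng.random((5, 5, 64, 64))
lf_synth = lf_gt + 0.01 * rng.standard_normal(lf_gt.shape)
print(refocused_image_error(lf_synth, lf_gt))
```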
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2020.2992354