Modeling Defocus-Disparity in Dual-Pixel Sensors


Bibliographic Details
Published in: IEEE International Conference on Computational Photography, pp. 1-12
Main Authors: Punnappurath, Abhijith; Abuolaim, Abdullah; Afifi, Mahmoud; Brown, Michael S.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.04.2020

Summary: Most modern consumer cameras use dual-pixel (DP) sensors that provide two sub-aperture views of the scene in a single photo capture. The DP sensor was designed to assist the camera's autofocus routine, which examines local disparity in the two sub-aperture views to determine which parts of the image are out of focus. Recently, these DP views have been used for tasks beyond autofocus, such as synthetic bokeh, reflection removal, and depth reconstruction. These recent methods treat the two DP views as stereo image pairs and apply stereo matching algorithms to compute local disparity. However, dual-pixel disparity is not caused by view parallax as in stereo, but instead is attributed to defocus blur that occurs in out-of-focus regions in the image. This paper proposes a new parametric point spread function to model the defocus-disparity that occurs on DP sensors. We apply our model to the task of depth estimation from DP data. An important feature of our model is its ability to exploit the symmetry property of the DP blur kernels at each pixel. We leverage this symmetry property to formulate an unsupervised loss function that does not require ground truth depth. We demonstrate our method's effectiveness on both DSLR and smartphone DP data.
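To make the symmetry property concrete, below is a minimal Python sketch (not the authors' implementation) of the kind of cross-convolution loss the abstract describes. It assumes the left- and right-view DP blur kernels at a pixel are horizontally mirrored copies of one another, so convolving the left view with the mirrored kernel and the right view with the original kernel should agree when the kernel is correct; the half-box PSF and the function names (mirrored_kernel, symmetry_loss, box_psf) are illustrative stand-ins, not the paper's parametric model.

import numpy as np
from scipy.signal import convolve2d

def mirrored_kernel(kernel):
    # Under the DP symmetry property, the right-view PSF is taken to be a
    # horizontally flipped copy of the left-view PSF.
    return kernel[:, ::-1]

def symmetry_loss(left, right, kernel):
    # Cross-convolution residual: if left = I * K and right = I * flip(K)
    # for some sharp image I, then left * flip(K) == right * K exactly
    # (convolution commutes), so the residual vanishes for the correct
    # kernel without requiring any ground-truth depth.
    lhs = convolve2d(left, mirrored_kernel(kernel), mode="valid")
    rhs = convolve2d(right, kernel, mode="valid")
    return float(np.mean((lhs - rhs) ** 2))

def box_psf(radius):
    # Crude half-box stand-in for a parametric DP PSF (illustrative only):
    # only the left half-aperture passes light, giving a one-sided kernel.
    size = 2 * radius + 1
    k = np.zeros((size, size))
    k[:, : radius + 1] = 1.0
    return k / k.sum()

# Toy usage: recover the blur size of a synthetic DP pair by picking the
# kernel radius with the smallest symmetry loss.
rng = np.random.default_rng(0)
sharp = rng.standard_normal((64, 64))
true_kernel = box_psf(3)
left_view = convolve2d(sharp, true_kernel, mode="same")
right_view = convolve2d(sharp, mirrored_kernel(true_kernel), mode="same")

losses = {r: symmetry_loss(left_view, right_view, box_psf(r)) for r in range(1, 6)}
print(min(losses, key=losses.get))  # prints 3, the true blur radius

The toy search over kernel radii simply shows that the residual vanishes only for the true blur size; in the paper this idea instead drives an unsupervised loss for per-pixel depth estimation from real DP data.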
ISSN: 2472-7636
DOI: 10.1109/ICCP48838.2020.9105278