Unrolled primal-dual networks for lensless cameras

Bibliographic Details
Published in: Optics Express, Vol. 30, No. 26, pp. 46324-46335
Main Authors: Kingshott, Oliver; Antipa, Nick; Bostan, Emrah; Akşit, Kaan
Format: Journal Article
Language: English
Published: United States, 19.12.2022

Summary: Conventional models for lensless imaging assume that each measurement results from convolving a given scene with a single, experimentally measured point-spread function. Such models fail to describe lensless cameras faithfully, as they do not account for optical aberrations or scenes with depth variations. Our work shows that learning a supervised primal-dual reconstruction method yields image quality matching the state of the art in the literature without demanding a large network capacity. We show that embedding learnable forward and adjoint models improves the reconstruction quality of lensless images (+5 dB PSNR) compared to works that assume a fixed point-spread function.
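For context, the reconstruction family the summary refers to builds on classical primal-dual (Chambolle-Pock) iterations for deconvolving a measurement against a point-spread function. The sketch below is an illustrative, non-learned baseline only, not the paper's method: it runs plain primal-dual iterations for a least-squares deconvolution with a fixed 1-D circular-convolution forward model. The function names (`conv`, `conv_adj`, `primal_dual_unrolled`) and the step-size choices are assumptions for illustration; in the learned variant described in the paper, the forward/adjoint operators and per-iteration updates would instead contain trainable parameters.

```python
import numpy as np

def conv(x, psf):
    # Forward model A: circular convolution with the PSF via FFT.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))

def conv_adj(y, psf):
    # Adjoint A^T: circular correlation (conjugate PSF spectrum).
    return np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(psf))))

def primal_dual_unrolled(b, psf, n_iters=300, tau=0.5, sigma=0.5):
    """Chambolle-Pock iterations for min_x 0.5 * ||A x - b||^2.

    Each loop body corresponds to one "unrolled" stage; a learned
    primal-dual network replaces these fixed updates (and the PSF)
    with trainable components. Step sizes assume ||A|| <= 1, i.e. a
    PSF whose spectrum magnitude is at most 1.
    """
    x = np.zeros_like(b)       # primal variable (scene estimate)
    x_bar = x.copy()           # extrapolated primal point
    y = np.zeros_like(b)       # dual variable (data-fit residual)
    for _ in range(n_iters):
        # Dual ascent: prox of the conjugate of 0.5*||. - b||^2.
        y = (y + sigma * (conv(x_bar, psf) - b)) / (1.0 + sigma)
        # Primal descent through the adjoint model.
        x_new = x - tau * conv_adj(y, psf)
        # Over-relaxation step.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

As a usage sketch, blurring a random signal with a mild, invertible PSF and running the loop recovers the signal far more accurately than the raw measurement; swapping the two FFT operators for small neural networks and backpropagating through the unrolled loop is the supervised setting the summary describes.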
ISSN: 1094-4087
DOI: 10.1364/OE.475521