Unrolled primal-dual networks for lensless cameras
Published in | Optics Express, Vol. 30, No. 26, pp. 46324–46335
---|---
Main Authors | , , ,
Format | Journal Article
Language | English
Published | United States, 19.12.2022
Summary: | Conventional models for lensless imaging assume that each measurement results from convolving a given scene with a single, experimentally measured point-spread function. Such models fail to simulate lensless cameras faithfully, as they account for neither optical aberrations nor scenes with depth variation. Our work shows that a supervised, learned primal-dual reconstruction method matches state-of-the-art image quality in the literature without demanding a large network capacity. We show that embedding learnable forward and adjoint models improves the reconstruction quality of lensless images (+5 dB PSNR) compared to works that assume a fixed point-spread function. |
ISSN: | 1094-4087
DOI: | 10.1364/OE.475521 |
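The summary describes unrolling primal-dual iterations with learnable forward and adjoint operators. As a rough illustration of the underlying iteration — not the authors' learned method — the following NumPy sketch runs classical Chambolle-Pock primal-dual steps for the fixed-PSF convolutional model that the paper improves upon. All function names and parameter choices here are illustrative assumptions; in a learned variant, the step sizes, update rules, and the forward/adjoint operators themselves would be replaced by trainable network components.

```python
import numpy as np

def fft_conv(x, psf_f):
    # Forward model A: circular convolution with the PSF, via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(x) * psf_f))

def fft_corr(y, psf_f):
    # Adjoint A^T: correlation, i.e. convolution with the conjugate spectrum
    return np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(psf_f)))

def unrolled_primal_dual(b, psf, n_iter=100, sigma=0.5, tau=0.5, theta=1.0):
    """Chambolle-Pock iterations for min_x 0.5 * ||A x - b||^2,
    where A is circular convolution with a fixed PSF.
    Step sizes must satisfy tau * sigma * ||A||^2 < 1."""
    psf_f = np.fft.fft2(np.fft.ifftshift(psf))  # PSF assumed centered
    x = np.zeros_like(b)
    x_bar = x.copy()
    y = np.zeros_like(b)
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of 0.5||. - b||^2
        y = (y + sigma * (fft_conv(x_bar, psf_f) - b)) / (1.0 + sigma)
        x_prev = x
        # Primal step: gradient-like descent using the adjoint model
        x = x - tau * fft_corr(y, psf_f)
        # Over-relaxation of the primal variable
        x_bar = x + theta * (x - x_prev)
    return x
```

A learned ("unrolled") version fixes `n_iter` to a small number, treats each iteration as a network layer, and trains the per-layer parameters end to end; the paper's key point is that making the forward/adjoint pair itself learnable, rather than fixing a single measured PSF, is what yields the reported gain.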