Deep null space learning for inverse problems: convergence analysis and rates
Published in | Inverse Problems, Vol. 35, no. 2, pp. 25008 - 25020
Main Authors | , ,
Format | Journal Article
Language | English
Published | IOP Publishing, 01.02.2019
Summary | Recently, deep learning based methods have appeared as a new paradigm for solving inverse problems. These methods empirically show excellent performance but lack theoretical justification; in particular, no results on their regularization properties are available. This is notably the case for two-step deep learning approaches, where a classical reconstruction method is applied to the data in a first step and a trained deep neural network is applied to improve the result in a second step. In this paper, we close the gap between practice and theory for a particular network structure in a two-step approach. To that end, we propose using so-called null space networks and introduce the concept of -regularization. Combined with a standard regularization method as the reconstruction layer, the proposed deep null space learning approach is shown to be a -regularization method; convergence rates are also derived. The proposed null space network structure naturally preserves data consistency, which is considered a key property of neural networks for solving inverse problems.
Bibliography | IP-101836.R1
ISSN | 0266-5611, 1361-6420
DOI | 10.1088/1361-6420/aaf14a
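The data-consistency property highlighted in the summary can be illustrated with a small linear-algebra sketch: if the correction produced by a network is projected onto the null space of the forward operator A, then the measured data A x are left untouched. The snippet below is a minimal NumPy illustration of that idea only; the forward matrix, `toy_network`, and all names are hypothetical placeholders, not the paper's actual construction or training procedure.

```python
import numpy as np

def null_space_network(A, base_network):
    """Wrap a refinement map so its correction lies in null(A).

    Returns a function x -> x + P_null @ base_network(x), where P_null is
    the orthogonal projector onto the null space of A. Since A @ P_null = 0,
    the wrapped map never changes the simulated measurements A @ x.
    """
    A_pinv = np.linalg.pinv(A)                # Moore-Penrose pseudoinverse
    P_null = np.eye(A.shape[1]) - A_pinv @ A  # projector onto null(A)

    def refined(x):
        return x + P_null @ base_network(x)

    return refined

# Hypothetical stand-in for a trained network: any map R^n -> R^n works here.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))               # underdetermined forward operator
W = rng.standard_normal((8, 8))
toy_network = lambda x: np.tanh(W @ x)

refine = null_space_network(A, toy_network)
x0 = rng.standard_normal(8)
x1 = refine(x0)

# Data consistency: the refinement changes x but not the measurements A @ x.
print(np.allclose(A @ x1, A @ x0))            # True
```

The projector `I - A^+ A` maps any vector into the null space of A, so the refined reconstruction can only differ from the input in directions invisible to the measurements, which is exactly the structural guarantee the abstract attributes to null space networks.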