Deep null space learning for inverse problems: convergence analysis and rates

Bibliographic Details
Published in: Inverse Problems, Vol. 35, No. 2, pp. 25008-25020
Main Authors: Schwab, Johannes; Antholzer, Stephan; Haltmeier, Markus
Format: Journal Article
Language: English
Published: IOP Publishing, 01.02.2019
Summary: Recently, deep learning based methods have emerged as a new paradigm for solving inverse problems. These methods empirically show excellent performance but lack theoretical justification; in particular, no results on their regularization properties are available. This is the case, in particular, for two-step deep learning approaches, in which a classical reconstruction method is applied to the data in a first step and a trained deep neural network is applied in a second step to improve the result. In this paper, we close the gap between practice and theory for a particular network structure in a two-step approach. For that purpose, we propose using so-called null space networks and introduce the concept of -regularization. Combined with a standard regularization method as reconstruction layer, the proposed deep null space learning approach is shown to be a -regularization method; convergence rates are also derived. The proposed null space network structure naturally preserves data consistency, which is considered a key property of neural networks for solving inverse problems.
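The data-consistency property mentioned in the summary can be illustrated with a minimal sketch (not the authors' code): a null space network adds to its input only a correction that lies in the null space of the forward operator A, e.g. N(x) = x + (I - A⁺A)U(x), where A⁺ is the pseudoinverse and U is any learned network (here a stand-in function). Since A(I - A⁺A) = 0, the measurements Ax are left unchanged. The operator A, the network U, and all dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined forward operator A: R^8 -> R^4 (nontrivial null space).
A = rng.standard_normal((4, 8))
A_pinv = np.linalg.pinv(A)          # A^+, computed exactly for this sketch
P_null = np.eye(8) - A_pinv @ A     # orthogonal projector onto ker(A)

def U(x):
    """Stand-in for a trained network; any map R^8 -> R^8 would do."""
    return np.tanh(x) + 0.5

def null_space_network(x):
    # Only the null-space component of U(x) is added, so A x is unchanged.
    return x + P_null @ U(x)

x = rng.standard_normal(8)
y = null_space_network(x)
print(np.allclose(A @ y, A @ x))    # data consistency A N(x) = A x: prints True
```

In a two-step method as described above, x would itself be the output of a classical regularization method applied to the data, and the learned null-space correction can only improve the reconstruction within the unmeasured component.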
Bibliography: IP-101836.R1
ISSN: 0266-5611
EISSN: 1361-6420
DOI: 10.1088/1361-6420/aaf14a