Convergence rates for iteratively regularized Gauss–Newton method subject to stability constraints

Bibliographic Details
Published in: Journal of Computational and Applied Mathematics, Vol. 400, p. 113744
Main Authors: Mittal, Gaurav; Giri, Ankik Kumar
Format: Journal Article
Language: English
Published: Elsevier B.V., 15.01.2022
Summary: In this paper we formulate the convergence rates of the iteratively regularized Gauss–Newton method by defining the iterates via convex optimization problems in a Banach space setting. We employ the concept of conditional stability to deduce the convergence rates in place of the well-known concept of variational inequalities. To validate our abstract theory, we discuss an ill-posed inverse problem that satisfies our assumptions, and we compare our results with the existing results in the literature.
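For orientation, the iteratively regularized Gauss–Newton method named in the abstract can be sketched in its classical Hilbert-space form with a quadratic penalty, which is a simpler special case of the convex-penalty Banach-space iterates studied in the paper. The forward operator `F` below is a hypothetical two-dimensional toy problem chosen only for illustration; it is not taken from the paper.

```python
import numpy as np

# Hypothetical toy forward operator F: R^2 -> R^2 (illustration only,
# not from the paper). The method itself needs only F and its Jacobian.
def F(x):
    return np.array([x[0]**2 + x[1]**2, x[0] - x[1]])

def jacobian(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

def irgnm(y, x0, alpha0=1.0, q=0.5, iters=25):
    """Classical IRGNM: each step minimizes the Tikhonov-regularized
    linearization  ||F(x_k) + F'(x_k) h - y||^2 + alpha_k ||x_k + h - x0||^2
    with a geometrically decreasing regularization parameter alpha_k."""
    x = x0.copy()
    for k in range(iters):
        alpha = alpha0 * q**k
        J = jacobian(x)
        # Normal equations of the regularized linearized least-squares problem
        A = J.T @ J + alpha * np.eye(len(x))
        b = J.T @ (y - F(x)) + alpha * (x0 - x)
        x = x + np.linalg.solve(A, b)
    return x

x_true = np.array([2.0, 1.0])
y = F(x_true)                       # noise-free data for this sketch
x_rec = irgnm(y, x0=np.array([1.5, 0.8]))
```

In the Banach-space setting of the paper, the quadratic penalty `alpha_k ||x_k + h - x0||^2` is replaced by a general convex penalty functional, and the convergence rates are derived from conditional stability estimates rather than variational inequalities.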
ISSN: 0377-0427; 1879-1778
DOI: 10.1016/j.cam.2021.113744