Convergence rates for iteratively regularized Gauss–Newton method subject to stability constraints
Published in | Journal of Computational and Applied Mathematics, Vol. 400, p. 113744
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 15.01.2022
Summary: | In this paper we formulate the convergence rates of the iteratively regularized Gauss–Newton method by defining the iterates via convex optimization problems in a Banach space setting. We employ the concept of conditional stability to deduce the convergence rates in place of the well-known concept of variational inequalities. To validate our abstract theory, we also discuss an ill-posed inverse problem that satisfies our assumptions. Finally, we compare our results with existing results in the literature.
ISSN: | 0377-0427; 1879-1778
DOI: | 10.1016/j.cam.2021.113744 |
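The summary above refers to iterates of the iteratively regularized Gauss–Newton method (IRGNM) defined via convex optimization problems in a Banach space. For orientation only, a commonly used convex-penalty form of such an iterate from the broader literature is sketched below; the spaces $\mathcal{X}$, $\mathcal{Y}$, the penalty $\mathcal{R}$, the exponent $r$, and the parameter sequence $\{\alpha_k\}$ are generic placeholders and need not match the exact scheme analyzed in the paper. Given a nonlinear operator $F:\mathcal{X}\to\mathcal{Y}$ and noisy data $y^\delta$ with $\|y^\delta - y\|_{\mathcal{Y}} \le \delta$, the iterate is

$$
x_{k+1}^{\delta} \in \operatorname*{arg\,min}_{x \in \mathcal{X}} \;
\frac{1}{r}\,\bigl\| F(x_k^{\delta}) + F'(x_k^{\delta})\,(x - x_k^{\delta}) - y^{\delta} \bigr\|_{\mathcal{Y}}^{r}
\;+\; \alpha_k\, \mathcal{R}(x), \qquad r \ge 1,
$$

where $F'(x_k^{\delta})$ denotes the Fréchet derivative of $F$ at $x_k^{\delta}$, $\mathcal{R}$ is a proper convex penalty functional, and $\{\alpha_k\}$ is a decreasing sequence of regularization parameters with $\alpha_k \to 0$. In the setting described by the summary, convergence rates for such iterates are obtained from a conditional stability estimate on the solution set rather than from variational inequalities.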