Error Estimation for Randomized Least-Squares Algorithms via the Bootstrap
Format | Journal Article
---|---
Language | English
Published | 21.03.2018
Summary: Over the course of the past decade, a variety of randomized algorithms have been proposed for computing approximate least-squares (LS) solutions in large-scale settings. A longstanding practical issue is that, for any given input, the user rarely knows the actual error of an approximate solution (relative to the exact solution). Likewise, it is difficult for the user to know precisely how much computation is needed to achieve the desired error tolerance. Consequently, the user often appeals to worst-case error bounds that tend to offer only qualitative guidance. As a more practical alternative, we propose a bootstrap method to compute a posteriori error estimates for randomized LS algorithms. These estimates permit the user to numerically assess the error of a given solution, and to predict how much work is needed to improve a "preliminary" solution. In addition, we provide theoretical consistency results for the method, which are the first such results in this context (to the best of our knowledge). From a practical standpoint, the method also has considerable flexibility, insofar as it can be applied to several popular sketching algorithms, as well as a variety of error metrics. Moreover, the extra step of error estimation does not add much cost to an underlying sketching algorithm. Finally, we demonstrate the effectiveness of the method with empirical results.
DOI | 10.48550/arxiv.1803.08021
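The abstract describes estimating the error of a sketched LS solution by bootstrapping the rows of the sketched problem. The following is a minimal illustrative sketch of that idea, not the paper's reference implementation: the problem sizes, the uniform row-sampling sketch, the number of bootstrap replicates, and variable names are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy LS problem (sizes are illustrative assumptions)
n, d, m = 10_000, 20, 500
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# One possible sketch: uniform row sampling with rescaling
# (the paper's method also applies to other popular sketches)
idx = rng.integers(0, n, size=m)
scale = np.sqrt(n / m)
SA, Sb = scale * A[idx], scale * b[idx]

# Approximate ("preliminary") solution from the sketched problem
x_tilde, *_ = np.linalg.lstsq(SA, Sb, rcond=None)

# Bootstrap a posteriori error estimate: resample rows of the
# sketched problem with replacement, re-solve, and record the
# fluctuation of each bootstrap solution around x_tilde
B = 100  # number of bootstrap replicates (an assumption)
errs = np.empty(B)
for t in range(B):
    j = rng.integers(0, m, size=m)
    x_star, *_ = np.linalg.lstsq(SA[j], Sb[j], rcond=None)
    errs[t] = np.linalg.norm(x_star - x_tilde)

# A high quantile of the bootstrap errors serves as a numerical
# estimate of the error ||x_tilde - x_exact|| for this error metric
eps_hat = np.quantile(errs, 0.95)
```

Because each bootstrap replicate re-solves only the small m-by-d sketched problem, the extra cost of error estimation stays modest relative to the original n-by-d problem, consistent with the abstract's claim; the bootstrap loop is also embarrassingly parallel.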