Robust guarantees for learning an autoregressive filter
Main Authors | , |
---|---|
Format | Journal Article |
Language | English |
Published | 23.05.2019 |
Subjects | |
Summary: | The optimal predictor for a linear dynamical system (with hidden state and Gaussian noise) takes the form of an autoregressive linear filter, namely the Kalman filter. However, a fundamental problem in reinforcement learning and control theory is to make optimal predictions in an unknown dynamical system. To this end, we take the approach of directly learning an autoregressive filter for time-series prediction under unknown dynamics. Our analysis differs from previous statistical analyses in that we regress not only on the inputs to the dynamical system, but also on its outputs, which is essential for dealing with process noise. The main challenge is to estimate the filter under worst-case input (in $\mathcal{H}_\infty$ norm), for which we use an $L^\infty$-based objective rather than ordinary least squares. For learning an autoregressive model, our algorithm has optimal sample complexity in terms of the rollout length, which does not seem to be attained by naive least squares. |
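A minimal sketch of the idea described in the summary, not the authors' algorithm: fit an autoregressive filter that regresses on both past inputs and past outputs, and compare an $L^\infty$ (maximum-residual) objective, cast as a linear program, against ordinary least squares. The simulated scalar system, the filter length `k`, and all variable names below are illustrative assumptions; the paper's actual guarantee concerns worst-case inputs in $\mathcal{H}_\infty$ norm, which this toy in-sample comparison does not capture.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Simulate a scalar linear system with hidden state and process noise (assumed toy system).
T, k = 400, 4                      # rollout length, filter length (assumed)
a, b, c = 0.9, 1.0, 1.0            # hidden-state dynamics x_{t+1} = a x_t + b u_t + noise
x, ys, us = 0.0, [], []
for t in range(T):
    u = rng.normal()
    y = c * x + 0.1 * rng.normal()          # observation noise
    x = a * x + b * u + 0.1 * rng.normal()  # process noise
    ys.append(y); us.append(u)
ys, us = np.array(ys), np.array(us)

# Build regression features from past outputs AND past inputs.
rows, targets = [], []
for t in range(k, T):
    rows.append(np.concatenate([ys[t - k:t][::-1], us[t - k:t][::-1]]))
    targets.append(ys[t])
Phi, y_vec = np.array(rows), np.array(targets)
n, d = Phi.shape

# Ordinary least-squares baseline.
w_ls, *_ = np.linalg.lstsq(Phi, y_vec, rcond=None)

# L-infinity fit: minimize the maximum residual via a linear program.
# Variables: [w (d entries), s]; minimize s subject to |y - Phi w| <= s elementwise.
c_lp = np.concatenate([np.zeros(d), [1.0]])
A_ub = np.block([[Phi, -np.ones((n, 1))],
                 [-Phi, -np.ones((n, 1))]])
b_ub = np.concatenate([y_vec, -y_vec])
res = linprog(c_lp, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * d + [(0, None)], method="highs")
w_inf = res.x[:d]

print("max |residual|, least squares :", np.max(np.abs(y_vec - Phi @ w_ls)))
print("max |residual|, L-infinity fit:", np.max(np.abs(y_vec - Phi @ w_inf)))
```

The $L^\infty$ fit trades average-case accuracy for a smaller worst-case residual, which is the qualitative contrast the summary draws between the proposed objective and naive least squares.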
DOI: | 10.48550/arxiv.1905.09897 |