Variable Selection and Minimax Prediction in High-dimensional Functional Linear Model
Main Authors | , , |
---|---|
Format | Journal Article |
Language | English |
Published | 22.10.2023 |
Subjects | |
Summary: | High-dimensional functional data have become increasingly prevalent in modern applications such as high-frequency financial data and neuroimaging data analysis. We investigate a class of high-dimensional linear regression models, where each predictor is a random element in an infinite-dimensional function space, and the number of functional predictors $p$ can potentially be ultra-high. Assuming that each of the unknown coefficient functions belongs to some reproducing kernel Hilbert space (RKHS), we regularize the fitting of the model by imposing a group elastic-net type of penalty on the RKHS norms of the coefficient functions. We show that our loss function is Gateaux sub-differentiable and that our functional elastic-net estimator exists uniquely in the product RKHS. Under suitable sparsity assumptions and a functional version of the irrepresentable condition, we derive a non-asymptotic tail bound for variable selection consistency of our method. Allowing the number of true functional predictors $q$ to diverge with the sample size, we also show that a post-selection refined estimator can achieve the oracle minimax optimal prediction rate. The proposed methods are illustrated through simulation studies and a real-data application from the Human Connectome Project. |
DOI: | 10.48550/arxiv.2310.14419 |
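A minimal sketch of the setup described in the summary, assuming a scalar response $Y_i$ observed with functional predictors $X_{i1},\dots,X_{ip}$ on a common domain $\mathcal{T}$; the tuning parameters $\lambda_1$ and $\lambda_2$ and the exact combination of penalty terms below are illustrative assumptions rather than the authors' specification:

\[
Y_i = \alpha + \sum_{j=1}^{p} \int_{\mathcal{T}} X_{ij}(t)\,\beta_j(t)\,dt + \varepsilon_i, \qquad i = 1,\dots,n,
\]

with each coefficient function $\beta_j$ belonging to a reproducing kernel Hilbert space $\mathcal{H}_j$ with norm $\|\cdot\|_{\mathcal{H}_j}$. A group elastic-net type of penalty on the RKHS norms could then lead to minimizing, over $\alpha$ and $\beta_1,\dots,\beta_p$ in the product RKHS,

\[
\frac{1}{2n}\sum_{i=1}^{n}\Bigl(Y_i - \alpha - \sum_{j=1}^{p}\int_{\mathcal{T}} X_{ij}(t)\,\beta_j(t)\,dt\Bigr)^{2} + \sum_{j=1}^{p}\Bigl(\lambda_1\,\|\beta_j\|_{\mathcal{H}_j} + \lambda_2\,\|\beta_j\|_{\mathcal{H}_j}^{2}\Bigr).
\]

Under the sparsity assumption mentioned in the summary, only $q$ of the $p$ coefficient functions are truly nonzero, and the post-selection refined estimator would refit the model using only the selected functional predictors.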