Prediction Modeling With Many Correlated and Zero‐Inflated Predictors: Assessing the Nonnegative Garrote Approach
Published in | Statistics in Medicine, Vol. 44, No. 8-9, e70062 |
---|---|
Format | Journal Article |
Language | English |
Published | Hoboken, USA: John Wiley & Sons, Inc. (Wiley Subscription Services, Inc.), 01.04.2025 |
Summary: | Building prediction models from mass‐spectrometry data is challenging due to the abundance of correlated features with varying degrees of zero‐inflation, leading to a common interest in reducing the features to a concise predictor set with good predictive performance, given the experiments' resource‐intensive nature. In this study, we established and examined regularized regression approaches designed to address zero‐inflated and correlated predictors. In particular, we describe a novel two‐stage regularized regression approach (ridge‐garrote) that explicitly models zero‐inflated predictors using two component variables, applying a ridge estimator in the first stage and a nonnegative garrote estimator in the second stage. We contrasted ridge‐garrote with one‐stage methods (ridge, lasso) and other two‐stage regularized regression approaches (lasso‐ridge, ridge‐lasso) for zero‐inflated predictors. We assessed the predictive performance and predictor selection properties of these methods in a comparative simulation study and in a real‐data case study aiming to predict kidney function from peptidomic features derived by mass spectrometry. In the simulation study, the predictive performance of all assessed approaches was comparable, yet the ridge‐garrote approach consistently selected more parsimonious models than its competitors in most scenarios. While lasso‐ridge achieved higher predictive accuracy than its competitors, it exhibited high variability in the number of selected predictors. Ridge‐lasso attained slightly higher predictive accuracy than ridge‐garrote, but at the expense of selecting more noise predictors. Overall, ridge emerged as a favorable option when variable selection is not a primary concern, while ridge‐garrote demonstrated notable practical utility in selecting a parsimonious set of predictors with only a minimal compromise in predictive accuracy. |
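The sketch below illustrates one way a two-stage ridge-garrote estimator of the kind described in the abstract could be assembled in Python with scikit-learn. The two-component expansion of each zero-inflated predictor (a nonzero indicator plus the raw value) and the use of a nonnegativity-constrained, L1-penalized regression for the garrote stage are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a two-stage "ridge-garrote" style estimator for
# zero-inflated predictors (assumptions noted above; not the authors' code).
import numpy as np
from sklearn.linear_model import RidgeCV, LassoCV


def expand_zero_inflated(X):
    """Represent each predictor by two components: I(x != 0) and x itself."""
    return np.hstack([(X != 0).astype(float), X])


# Simulated zero-inflated design, purely for illustration.
rng = np.random.default_rng(1)
n, p = 200, 30
X = rng.gamma(1.0, 1.0, size=(n, p)) * rng.binomial(1, 0.6, size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + rng.normal(size=n)

Z = expand_zero_inflated(X)  # 2*p component variables

# Stage 1: ridge regression yields stable initial coefficients despite the
# strong correlation between each indicator and its corresponding value.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(Z, y)
beta_init = ridge.coef_

# Stage 2: nonnegative garrote. Rescale each column by its initial ridge
# coefficient and estimate nonnegative shrinkage factors c_j under an L1
# penalty (the Lagrangian form of the garrote constraint sum(c_j) <= s).
garrote = LassoCV(cv=5, positive=True).fit(Z * beta_init, y)
c = garrote.coef_

beta_final = c * beta_init  # c_j == 0 drops the j-th component entirely
print(f"selected {np.count_nonzero(beta_final)} of {Z.shape[1]} components")
```

Cross-validation tunes both penalties here; shrinkage factors set to zero in the garrote stage remove the corresponding components, which is what drives the parsimonious selection discussed in the abstract.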
Bibliography: | Funding: This work was supported by the Austrian Academy of Sciences, from which Mariella Gregorich received funding as part of the DOC fellowship. |
ISSN: | 0277-6715; 1097-0258 |
DOI: | 10.1002/sim.70062 |