Convex Optimization with Sparsity-Inducing Norms

Bibliographic Details
Published in: Optimization for Machine Learning, pp. 19–49
Main Authors: Bach, Francis; Jenatton, Rodolphe; Mairal, Julien; Obozinski, Guillaume
Format: Book Chapter
Language: English
Published: United States: The MIT Press, 30.09.2011
Series: Neural information processing series
ISBN: 026201646X; 9780262016469
DOI: 10.7551/mitpress/8996.003.0004

Summary: The principle of parsimony is central to many areas of science: the simplest explanation of a given phenomenon should be preferred over more complicated ones. In the context of machine learning, it takes the form of variable or feature selection, and it is commonly used in two situations. First, to make the model or the prediction more interpretable or computationally cheaper to use; that is, even if the underlying problem is not sparse, one looks for the best sparse approximation. Second, sparsity can also be used given prior knowledge that the model should be sparse. For variable selection in linear …
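The canonical instance of what the summary describes is variable selection in linear models via an l1 penalty (the lasso). As a minimal illustrative sketch, not the chapter's own code, the Python snippet below solves the lasso with ISTA, a proximal gradient method of the kind this chapter covers; the function names, data, and parameter values are assumptions made up for the example.

```python
# Illustrative sketch: l1-penalized least squares (lasso) solved by ISTA
# (proximal gradient descent). All names and data here are hypothetical.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: shrinks each coordinate toward zero."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - X w||_2^2 + lam * ||w||_1 by proximal gradient."""
    n, d = X.shape
    w = np.zeros(d)
    # Step size 1/L, where L = ||X||_2^2 is the Lipschitz constant of the
    # gradient of the smooth least-squares term.
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                         # gradient of smooth part
        w = soft_threshold(w - step * grad, step * lam)  # prox step on l1 part
    return w

# Toy data: only 3 of 50 features are truly relevant; the l1 penalty
# zeroes out most coordinates, recovering a sparse weight vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = lasso_ista(X, y, lam=1.0)
print("nonzero coefficients:", np.flatnonzero(np.round(w_hat, 3)))
```

The soft-thresholding step is where sparsity is induced: unlike a plain gradient step, it sets small coordinates exactly to zero rather than merely shrinking them.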