Convex Optimization with Sparsity-Inducing Norms

Bibliographic Details
Published in Optimization for Machine Learning, pp. 19-49
Main Authors Bach, Francis, Jenatton, Rodolphe, Mairal, Julien, Obozinski, Guillaume
Format Book Chapter
Language English
Published United States: The MIT Press, 30.09.2011
SeriesNeural information processing series
ISBN 026201646X
9780262016469
DOI 10.7551/mitpress/8996.003.0004


Abstract The principle of parsimony is central to many areas of science: the simplest explanation of a given phenomenon should be preferred over more complicated ones. In the context of machine learning, it takes the form of variable or feature selection, and it is commonly used in two situations. First, to make the model or the prediction more interpretable or computationally cheaper to use, that is, even if the underlying problem is not sparse, one looks for the best sparse approximation. Second, sparsity can also be used given prior knowledge that the model should be sparse. For variable selection in linear …
AbstractList An up-to-date account of the interplay between optimization and machine learning, accessible to students and researchers in both communities. The interplay between optimization and machine learning is one of the most important developments in modern computational science. Optimization formulations and methods are proving to be vital in designing algorithms to extract essential knowledge from huge volumes of data. Machine learning, however, is not simply a consumer of optimization technology but a rapidly evolving field that is itself generating new optimization ideas. This book captures the state of the art of the interaction between optimization and machine learning in a way that is accessible to researchers in both fields. Optimization approaches have enjoyed prominence in machine learning because of their wide applicability and attractive theoretical properties. The increasing complexity, size, and variety of today's machine learning models call for the reassessment of existing assumptions. This book starts the process of reassessment. It describes the resurgence in novel contexts of established frameworks such as first-order methods, stochastic approximations, convex relaxations, interior-point methods, and proximal methods. It also devotes attention to newer themes such as regularized optimization, robust optimization, gradient and subgradient methods, splitting techniques, and second-order methods. Many of these techniques draw inspiration from other fields, including operations research, theoretical computer science, and subfields of optimization. The book will enrich the ongoing cross-fertilization between the machine learning community and these other fields, and within the broader optimization community.
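A minimal sketch of the variable-selection mechanism the abstract describes, assuming the standard l1-regularized least-squares (lasso) formulation, min_w (1/2)||y - Xw||_2^2 + lambda*||w||_1, solved by plain proximal gradient descent (ISTA); the helper names and the toy data below are illustrative assumptions, not code from the chapter.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # Proximal gradient descent for the lasso objective above.
    n, d = X.shape
    w = np.zeros(d)
    # Step size = 1 / Lipschitz constant of the smooth term's gradient.
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                          # gradient of (1/2)||y - Xw||^2
        w = soft_threshold(w - step * grad, step * lam)   # proximal step
    return w

# Toy problem: only 3 of 20 features are truly active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[[0, 5, 12]] = [2.0, -3.0, 1.5]
y = X @ w_true + 0.1 * rng.standard_normal(100)

w_hat = ista(X, y, lam=5.0)
print("nonzero coefficients:", np.flatnonzero(np.abs(w_hat) > 1e-6))

The soft-thresholding step is what sets small coefficients exactly to zero; this is how an l1-type sparsity-inducing norm performs variable selection rather than mere shrinkage.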
Author Francis Bach
Rodolphe Jenatton
Julien Mairal
Guillaume Obozinski
Author_xml – sequence: 1
  givenname: Francis
  orcidid: 0000-0001-8644-1058
  surname: Bach
  fullname: Bach, Francis
  organization: Statistical Machine Learning and Parsimony
– sequence: 2
  givenname: Rodolphe
  surname: Jenatton
  fullname: Jenatton, Rodolphe
  organization: Statistical Machine Learning and Parsimony
– sequence: 3
  givenname: Julien
  orcidid: 0000-0001-6991-2110
  surname: Mairal
  fullname: Mairal, Julien
  organization: Department of Statistics [Berkeley]
– sequence: 4
  givenname: Guillaume
  surname: Obozinski
  fullname: Obozinski, Guillaume
  organization: Statistical Machine Learning and Parsimony
BackLink https://hal.science/hal-00937150 (View record in HAL)
ContentType Book Chapter
Copyright 2012 Massachusetts Institute of Technology
Distributed under a Creative Commons Attribution 4.0 International License
DOI 10.7551/mitpress/8996.003.0004
DatabaseName ProQuest Ebook Central - Book Chapters - Demo use only
Hyper Article en Ligne (HAL)
Discipline Philosophy
Computer Science
EISBN 9780262298773
0262298775
Editor Nowozin, Sebastian
Sra, Suvrit
Wright, Stephen J
Editor_xml – sequence: 1
  givenname: Suvrit
  surname: Sra
  fullname: Sra, Suvrit
– sequence: 2
  givenname: Sebastian
  surname: Nowozin
  fullname: Nowozin, Sebastian
– sequence: 3
  givenname: Stephen J
  surname: Wright
  fullname: Wright, Stephen J
EndPage 49
ExternalDocumentID oai_HAL_hal_00937150v1
EBC3339310_10_34
10_7551_mitpress_8996_003_0004
j.ctt5hhgpg.6
ISBN 026201646X
9780262016469
IsPeerReviewed false
IsScholarly false
Language English
License Distributed under a Creative Commons Attribution 4.0 International License: http://creativecommons.org/licenses/by/4.0
OCLC 758384972
ORCID 0000-0001-8644-1058
0000-0001-6991-2110
PQID EBC3339310_10_34
PageCount 31
PublicationCentury 2000
PublicationDate 20110930
2011
PublicationDateYYYYMMDD 2011-09-30
2011-01-01
PublicationDate_xml – month: 09
  year: 2011
  text: 20110930
  day: 30
PublicationDecade 2010
PublicationPlace United States
PublicationPlace_xml – name: United States
PublicationSeriesTitle Neural information processing series
PublicationTitle Optimization for Machine Learning
PublicationYear 2011
Publisher The MIT Press
MIT Press
Publisher_xml – name: The MIT Press
– name: MIT Press
SourceID hal
proquest
mit
jstor
SourceType Open Access Repository
Publisher
StartPage 19
SubjectTerms Aesthetic judgment
Aesthetic simplicity
Aesthetics
Artificial Intelligence
Axiology
Beauty
Cardinality
Computer Science
Formal logic
Logic
Logical topics
Machine Learning
Machine Learning & Neural Networks
Mathematical logic
Mathematical relations
Mathematical set theory
Parsimony
Philosophy
Title Convex Optimization with Sparsity-Inducing Norms
URI https://www.jstor.org/stable/j.ctt5hhgpg.6
http://dx.doi.org/10.7551/mitpress/8996.003.0004
http://ebookcentral.proquest.com/lib/SITE_ID/reader.action?docID=3339310&ppg=34&c=UERG
https://hal.science/hal-00937150