Perturbations, Optimization, and Statistics

A description of perturbation-based methods developed in machine learning to augment novel optimization methods with strong statistical guarantees.

Bibliographic Details
Editors: Hazan, Tamir; Papandreou, George; Tarlow, Daniel
Format: eBook
Language: English
Published: Cambridge: The MIT Press, 2016
Edition: 1
Series: Neural information processing series
Pages: 412
Subjects: Algorithms; Computer Science; Machine learning; Machine Learning & Neural Networks; Mathematical optimization; Neural networks (Computer science); Perturbation (Mathematics); Statistics
ISBN: 9780262035644; 0262035642
eISBN: 9780262337939; 0262337932
DOI: 10.7551/mitpress/10761.001.0001
Dewey: 515/.392
LC Call Number: Q325.5 .H393 2016
OCLC: 1013474878
Notes: Includes bibliographical references
Online Access:
http://dx.doi.org/10.7551/mitpress/10761.001.0001
https://cir.nii.ac.jp/crid/1130000795327971456
https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=5966046
https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9780262337939

Abstract

A description of perturbation-based methods developed in machine learning to augment novel optimization methods with strong statistical guarantees.

In nearly all machine learning, decisions must be made given current knowledge. Surprisingly, making what is believed to be the best decision is not always the best strategy, even when learning in a supervised learning setting. An emerging body of work on learning under different rules applies perturbations to decision and learning procedures. These methods provide simple and highly efficient learning rules with improved theoretical guarantees. This book describes perturbation-based methods developed in machine learning to augment novel optimization methods with strong statistical guarantees, offering readers a state-of-the-art overview.

Chapters address recent modeling ideas that have arisen within the perturbations framework, including Perturb & MAP, herding, and the use of neural networks to map generic noise to distributions over highly structured data. They describe new learning procedures for perturbation models, including an improved EM algorithm and a learning algorithm that aims to match moments of model samples to moments of data. They discuss the relation of perturbation models to their traditional counterparts, with one chapter showing that the perturbations viewpoint can lead to new algorithms in the traditional setting. And they consider perturbation-based regularization in neural networks, offering a more complete understanding of dropout and studying perturbations in the context of deep neural networks.
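
To give a concrete flavor of the core idea, the sketch below (illustrative only, not code from the book) shows the Gumbel-max trick that underlies Perturb & MAP models: adding i.i.d. Gumbel noise to the log-potentials of a discrete distribution and taking the argmax, i.e. a single MAP call, yields an exact sample from the corresponding Gibbs distribution.

    import numpy as np

    def perturb_and_map(log_potentials, rng):
        # Add i.i.d. Gumbel(0, 1) noise to each log-potential and return the
        # argmax; the result is an exact sample from softmax(log_potentials).
        noise = rng.gumbel(size=log_potentials.shape)
        return int(np.argmax(log_potentials + noise))

    rng = np.random.default_rng(0)
    log_phi = np.log(np.array([0.1, 0.2, 0.7]))  # unnormalized log-potentials
    samples = [perturb_and_map(log_phi, rng) for _ in range(10_000)]
    print(np.bincount(samples, minlength=3) / len(samples))  # roughly [0.1, 0.2, 0.7]

In the structured models the book studies, the exhaustive argmax is replaced by a combinatorial MAP solver, and fully independent noise is typically approximated by low-dimensional perturbations.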
Table of Contents

Preface
1 Introduction (1.1 Scope; 1.2 Regularization; 1.3 Modeling; 1.4 Roadmap; 1.5 References)
2 Perturb-and-MAP Random Fields (2.1 Energy-Based Models: Deterministic vs. Probabilistic Approaches; 2.2 Perturb-and-MAP for Gaussian and Sparse Continuous MRFs; 2.3 Perturb-and-MAP for MRFs with Discrete Labels; 2.4 On the Representation Power of the Perturb-and-MAP Model; 2.5 Related Work and Recent Developments; 2.6 Discussion; 2.7 References)
3 Factorizing Shortest Paths with Randomized Optimum Models (3.1 Introduction; 3.2 Building Structured Models: Design Considerations; 3.3 Randomized Optimum Models (RandOMs); 3.4 Learning RandOMs; 3.5 RandOMs for Image Registration; 3.6 Shortest Path Factorization; 3.7 Shortest Path Factorization with RandOMs; 3.8 Experiments; 3.9 Related Work; 3.10 Discussion; 3.11 References)
4 Herding as a Learning System with Edge-of-Chaos Dynamics (4.1 Introduction; 4.2 Herding Model Parameters; 4.3 Generalized Herding; 4.4 Experiments; 4.5 Summary; 4.6 Conclusion; 4.8 References)
5 Learning Maximum A-Posteriori Perturbation Models (5.1 Introduction; 5.2 Background and Notation; 5.3 Expressive Power of Perturbation Models; 5.4 Higher Order Dependencies; 5.5 Markov Properties and Perturbation Models; 5.6 Conditional Distributions; 5.7 Learning Perturbation Models; 5.8 Empirical Results; 5.9 Perturbation Models and Stability; 5.10 Related Work; 5.11 References)
6 On the Expected Value of Random Maximum A-Posteriori Perturbations (6.1 Introduction; 6.2 Inference and Random Perturbations; 6.3 Low-Dimensional Perturbations; 6.4 Empirical Evaluation; 6.5 References)
7 A Poisson Process Model for Monte Carlo (7.1 Introduction; 7.2 Poisson Processes; 7.3 Exponential Races; 7.4 Gumbel Processes; 7.5 Monte Carlo Methods That Use Bounds; 7.6 Conclusion; 7.9 References)
8 Perturbation Techniques in Online Learning and Optimization (8.1 Introduction; 8.2 Preliminaries; 8.3 Gradient-Based Prediction Algorithm; 8.4 Generic Bounds; 8.5 Experts Setting; 8.6 Euclidean Balls Setting; 8.7 The Multi-Armed Bandit Setting; 8.9 References)
9 Probabilistic Inference by Hashing and Optimization (9.1 Introduction; 9.2 Problem Statement and Assumptions; 9.3 Approximate Model Counting via Randomized Hashing; 9.4 Probabilistic Models and Approximate Inference: The WISH Algorithm; 9.5 Optimization Subject to Parity Constraints; 9.6 Applications; 9.7 Open Problems and Research Challenges; 9.8 Conclusion; 9.9 References)
10 Perturbation Models and PAC-Bayesian Generalization Bounds (10.1 Introduction; 10.2 Background; 10.3 PAC-Bayesian Generalization Bounds; 10.4 Algorithms; 10.5 The Bayesian Perspective; 10.6 Approximate Inference; 10.7 Empirical Evaluation; 10.8 Discussion; 10.9 References)
11 Adversarial Perturbations of Deep Neural Networks (11.1 Introduction; 11.2 Adversarial Examples; 11.3 Adversarial Training; 11.4 Generative Adversarial Networks; 11.5 Discussion; 11.6 References)
12 Data Augmentation via Lévy Processes (12.1 Introduction; 12.2 Lévy Thinning; 12.3 Examples; 12.4 Simulation Experiments; 12.5 Discussion; 12.6 Appendix: Proof of Theorem 12.4; 12.7 References)
13 Bilu-Linial Stability (13.1 Introduction; 13.2 Stable Instances of Graph Partitioning Problems; 13.3 Stable Instances of Clustering Problems; 13.4 References)
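
As a small taste of the material in Chapter 8, the following follow-the-perturbed-leader sketch (illustrative only; the function names and noise scale are not from the book) plays, at each round, the expert whose cumulative loss looks smallest after adding fresh random noise; the perturbation is what lets this simple rule achieve low regret against an adversary.

    import numpy as np

    def ftpl_choose(cumulative_losses, eta, rng):
        # Follow the perturbed leader: pick the argmin of the cumulative
        # losses after subtracting fresh exponential noise of scale 1/eta.
        noise = rng.exponential(scale=1.0 / eta, size=cumulative_losses.shape)
        return int(np.argmin(cumulative_losses - noise))

    rng = np.random.default_rng(1)
    n_experts, horizon = 5, 1000
    cum_losses = np.zeros(n_experts)
    learner_loss = 0.0
    for t in range(horizon):
        i = ftpl_choose(cum_losses, eta=0.1, rng=rng)
        round_losses = rng.uniform(size=n_experts)  # losses revealed after playing
        learner_loss += round_losses[i]
        cum_losses += round_losses
    print(learner_loss - cum_losses.min())  # regret vs. best fixed expert in hindsight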