Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint


Bibliographic Details
Published in: The Journal of Artificial Intelligence Research, Vol. 74, pp. 661–690
Main Authors: Amanatidis, Georgios; Fusco, Federico; Lazos, Philip; Leonardi, Stefano; Reiffenhäuser, Rebecca
Format: Journal Article
Language: English
Published: San Francisco: AI Access Foundation, 01.01.2022

Summary: Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, frequently those instances are also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83-approximation and runs in O(n log n) time, i.e., at least a factor n faster than other state-of-the-art algorithms. The versatility of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a (9 + ε)-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
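To make the problem setting concrete, the Python sketch below implements a plain marginal-density greedy with random tie-breaking for submodular maximization under a knapsack constraint. It is an illustrative baseline only, not the paper's 5.83-approximation algorithm; the value oracle f, the cost map, and the budget are assumed inputs.

import random

def density_greedy(items, cost, budget, f):
    """Marginal-density greedy for maximizing a submodular set function f
    under the knapsack constraint sum(cost[e] for e in S) <= budget.
    Illustrative sketch only; not the paper's 5.83-approximation algorithm."""
    solution, spent = set(), 0.0
    remaining = set(items)
    while remaining:
        candidates = [e for e in remaining if spent + cost[e] <= budget]
        if not candidates:
            break
        random.shuffle(candidates)  # random tie-breaking among equal densities
        base = f(solution)
        # Pick the affordable item with the best positive marginal value per unit cost.
        best, best_density = None, 0.0
        for e in candidates:
            density = (f(solution | {e}) - base) / cost[e]
            if density > best_density:
                best, best_density = e, density
        if best is None:  # no affordable item improves the objective
            break
        solution.add(best)
        spent += cost[best]
        remaining.discard(best)
    return solution

# Toy usage with a (submodular) coverage function: each item covers a set of elements.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set.union(set(), *(coverage[e] for e in S)))
print(density_greedy(coverage.keys(), {1: 2.0, 2: 1.0, 3: 1.5}, budget=3.0, f=f))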
ISSN: 1076-9757
EISSN: 1943-5037
DOI: 10.1613/jair.1.13472