Chance-Constrained Submodular Knapsack Problem

Bibliographic Details
Published in: Computing and Combinatorics, pp. 103-114
Main Authors: Chen, Junjie; Maehara, Takanori
Format: Book Chapter
Language: English
Published: Cham: Springer International Publishing, 21.07.2019
Series: Lecture Notes in Computer Science

Summary: In this study, we consider the chance-constrained submodular knapsack problem: given a set of items whose sizes are random variables following known probability distributions, and a nonnegative monotone submodular objective function, we must find a subset of items that maximizes the objective function subject to the constraint that the probability of the total item size exceeding the knapsack capacity is at most a given threshold. This problem is a common generalization of the chance-constrained knapsack problem and the submodular knapsack problem. Specifically, we consider two cases: item sizes that follow normal distributions, and item sizes that follow arbitrary but known distributions. For the normal distribution case, we propose an algorithm that finds a solution whose expected profit is at least $$1 - e^{-1} - O(\epsilon )$$ times the optimal. For the arbitrary distribution case, we propose an algorithm that achieves the same approximation factor but satisfies a relaxed version of the constraint, in which both the knapsack capacity and the overflow probability are relaxed. Both algorithms are built on the same strategy: reduce the chance constraint to a multidimensional knapsack constraint by guessing parameters, and solve the reduced multidimensional-knapsack-constrained submodular maximization problem by the continuous relaxation and rounding method.
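For reference, the normal-distribution case admits a standard deterministic reformulation of the chance constraint. The sketch below assumes independent item sizes $$s_i \sim \mathcal {N}(\mu _i, \sigma _i^2)$$ and a positive total variance; these assumptions are made here only for illustration and may differ from the chapter's precise model. With $$\Phi ^{-1}$$ denoting the standard normal quantile function, $$C$$ the knapsack capacity, and $$\delta $$ the overflow threshold, the chance constraint on a set $$S$$ is equivalent to a joint constraint on the total mean and total variance:

$$\Pr \Big [\textstyle \sum _{i \in S} s_i > C\Big ] \le \delta \iff \sum _{i \in S} \mu _i + \Phi ^{-1}(1 - \delta )\sqrt{\textstyle \sum _{i \in S} \sigma _i^2} \le C.$$

Guessing the value of the variance term then yields two linear, knapsack-type constraints, one on the means and one on the variances, which is in the spirit of the reduction to a multidimensional knapsack constraint described in the abstract.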
ISBN: 9783030261757; 3030261751
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-26176-4_9