Sequential Sampling to Myopically Maximize the Expected Value of Information

Bibliographic Details
Published in: INFORMS Journal on Computing, Vol. 22, No. 1, pp. 71-80
Main Authors: Chick, Stephen E.; Branke, Jürgen; Schmidt, Christian
Format: Journal Article
Language: English
Published: Linthicum: Institute for Operations Research and the Management Sciences (INFORMS), 01.01.2010

Summary: Statistical selection procedures are used to select the best of a finite set of alternatives, where "best" is defined in terms of each alternative's unknown expected value, and the expected values are inferred through statistical sampling. One effective approach, which is based on a Bayesian probability model for the unknown mean performance of each alternative, allocates samples by maximizing an approximation to the expected value of information (EVI) from those samples. These approximations include asymptotic and probabilistic approximations. This paper derives sampling allocations that avoid most of those approximations to the EVI but entail sequential, myopic sampling from a single alternative per stage of sampling. We demonstrate empirically that the benefits of reducing the number of approximations in the previous algorithms are typically outweighed by the deleterious effects of a sequential one-step myopic allocation when more than a few dozen samples are allocated. The theory clarifies the derivation of selection procedures that are based on the EVI.
ISSN: 1091-9856, 1526-5528
DOI: 10.1287/ijoc.1090.0327
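
The sketch below is only a rough illustration of the sequential, one-step myopic EVI idea described in the summary above, not the procedure derived in the paper. It assumes independent normal samples with known sampling variances and independent normal priors on the unknown means, and it uses the standard linear-loss (knowledge-gradient-style) formula as the per-sample EVI approximation; the function names `one_step_evi` and `sequential_myopic_selection` are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def one_step_evi(mu, var_post, var_sample):
    """Approximate EVI of one additional sample from each alternative.

    mu         : current posterior means of the unknown performances
    var_post   : current posterior variances of the unknown means
    var_sample : known sampling variances of one observation (assumption)
    Requires at least two alternatives.
    """
    k = len(mu)
    evi = np.empty(k)
    for i in range(k):
        # Posterior variance of the mean after one more observation of i.
        var_new = 1.0 / (1.0 / var_post[i] + 1.0 / var_sample[i])
        sigma_tilde = np.sqrt(max(var_post[i] - var_new, 0.0))
        if sigma_tilde == 0.0:
            evi[i] = 0.0
            continue
        best_other = np.max(np.delete(mu, i))
        # Linear-loss (knowledge-gradient-style) one-step EVI approximation.
        z = -abs(mu[i] - best_other) / sigma_tilde
        evi[i] = sigma_tilde * (z * norm.cdf(z) + norm.pdf(z))
    return evi

def sequential_myopic_selection(sample, mu0, var0, var_sample, budget):
    """Allocate `budget` samples one at a time, then select an alternative.

    At each stage, sample the single alternative whose next observation has
    the largest approximate EVI, then update its posterior (normal-normal
    conjugate update with known sampling variance).
    `sample(i)` draws one observation from alternative i (user-supplied).
    """
    mu = np.array(mu0, dtype=float)
    var_post = np.array(var0, dtype=float)
    var_sample = np.array(var_sample, dtype=float)
    for _ in range(budget):
        i = int(np.argmax(one_step_evi(mu, var_post, var_sample)))
        y = sample(i)
        prec = 1.0 / var_post[i] + 1.0 / var_sample[i]
        mu[i] = (mu[i] / var_post[i] + y / var_sample[i]) / prec
        var_post[i] = 1.0 / prec
    # Select the alternative with the highest posterior mean.
    return int(np.argmax(mu))
```

As a usage note, `sequential_myopic_selection(lambda i: np.random.normal(true_means[i], 1.0), mu0=np.zeros(5), var0=np.full(5, 100.0), var_sample=np.ones(5), budget=50)` would run 50 stages of one-sample-per-stage allocation; the paper's empirical point is that such fully sequential myopic allocation can lose its edge once more than a few dozen samples are allocated.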