TY - JOUR
T1 - On rates of convergence for stochastic optimization problems under non-independent and identically distributed sampling
AU - Homem-de-Mello, Tito
PY - 2008/6
Y1 - 2008/6
N2 - In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This problem is well studied when the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); here we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a given sampling scheme carry over to the optimization case, in the sense that the approximating optimal solutions and optimal values converge to their true counterparts at the same rates as in pointwise estimation. We apply our results to two well-established sampling schemes, namely, Latin hypercube sampling and randomized quasi-Monte Carlo (QMC). The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study of the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the ideas discussed.
AB - In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This problem is well studied when the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); here we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a given sampling scheme carry over to the optimization case, in the sense that the approximating optimal solutions and optimal values converge to their true counterparts at the same rates as in pointwise estimation. We apply our results to two well-established sampling schemes, namely, Latin hypercube sampling and randomized quasi-Monte Carlo (QMC). The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study of the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the ideas discussed.
KW - Latin hypercube sampling
KW - Monte Carlo simulation
KW - Quasi-Monte Carlo methods
KW - Sample average approximation
KW - Stochastic optimization
KW - Two-stage stochastic programming with recourse
KW - Variance reduction techniques
UR - http://www.scopus.com/inward/record.url?scp=67649498294&partnerID=8YFLogxK
U2 - 10.1137/060657418
DO - 10.1137/060657418
M3 - Article
AN - SCOPUS:67649498294
SN - 1052-6234
VL - 19
SP - 524
EP - 551
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 2
ER -