On rates of convergence for stochastic optimization problems under non-independent and identically distributed sampling*

Research output: Contribution to journal › Article › peer-review

68 Citations (Scopus)

Abstract

In this paper we discuss the issue of solving stochastic optimization problems by means of sample average approximations. Our focus is on rates of convergence of estimators of optimal solutions and optimal values with respect to the sample size. This is a well-studied problem when the samples are independent and identically distributed (i.e., when standard Monte Carlo simulation is used); here we study the case where that assumption is dropped. Broadly speaking, our results show that, under appropriate assumptions, the rates of convergence for pointwise estimators under a sampling scheme carry over to the optimization case, in the sense that convergence of approximating optimal solutions and optimal values to their true counterparts has the same rates as in pointwise estimation. We apply our results to two well-established sampling schemes, namely, Latin hypercube sampling and randomized quasi-Monte Carlo (QMC). The novelty of our work arises from the fact that, while there has been some work on the use of variance reduction techniques and QMC methods in stochastic optimization, none of the existing work, to the best of our knowledge, has provided a theoretical study of the effect of these techniques on rates of convergence for the optimization problem. We present numerical results for some two-stage stochastic programs from the literature to illustrate the discussed ideas.
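To make the setting concrete, the sketch below illustrates sample average approximation (SAA) on a simple newsvendor-style problem, comparing plain Monte Carlo sampling with one-dimensional Latin hypercube sampling. The problem data, function names, and parameters are illustrative assumptions, not taken from the paper.

# Minimal sketch (illustrative only): SAA of a newsvendor problem under
# Monte Carlo vs. Latin hypercube sampling. All names and data are assumed.
import numpy as np

def latin_hypercube(n, rng):
    """Stratified uniform(0,1) sample of size n (one-dimensional LHS)."""
    strata = (np.arange(n) + rng.random(n)) / n   # one point per stratum
    return rng.permutation(strata)

def saa_solve(u, price=5.0, cost=3.0, demand_mean=100.0):
    """Solve min_x (1/n) sum_i [cost*x - price*min(x, D_i)] with exponential
    demand; the SAA optimum is the empirical (price-cost)/price quantile."""
    demand = -demand_mean * np.log1p(-u)          # inverse-CDF transform
    x_hat = np.quantile(demand, (price - cost) / price)
    obj_hat = np.mean(cost * x_hat - price * np.minimum(x_hat, demand))
    return x_hat, obj_hat

rng = np.random.default_rng(0)
n = 1000
for name, u in [("Monte Carlo", rng.random(n)),
                ("Latin hypercube", latin_hypercube(n, rng))]:
    x_hat, obj_hat = saa_solve(u)
    print(f"{name:16s}  x_hat = {x_hat:7.2f}   objective = {obj_hat:7.2f}")

Under the paper's theme, the point of interest is how fast such SAA estimators approach the true optimal solution and value as n grows under each sampling scheme.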

Original language: English
Pages (from-to): 524-551
Number of pages: 28
Journal: SIAM Journal on Optimization
Volume: 19
Issue number: 2
DOI
Publication status: Published - Jun 2008
