Monte Carlo sampling-based methods in stochastic programming
Many problems in business, engineering, and science involve uncertainty, yet in practice such complex systems are often optimized with deterministic model parameters. Stochastic programming extends deterministic optimization by incorporating random variables and probabilistic statements. A major challenge in analyzing large-scale stochastic systems is the need to consider a large, sometimes infinite, number of scenarios. This usually leads to intractable models, even when specially designed algorithms are used. A natural question, then, is how to use a limited number of these scenarios and still obtain reasonable solutions. In this dissertation, we focus on Monte Carlo sampling-based methods for solving large-scale stochastic programs.

Given a candidate solution, proposed as an approximate solution to the original problem, the first question we address is how to assess its quality. Determining whether a solution is of high quality (optimal or near-optimal) is a fundamental question in optimization theory and algorithms. We define quality via the optimality gap and develop sampling-based procedures to form confidence intervals on this gap. Whereas an earlier procedure requires the solution of many optimization problems, our procedures require solving only one or two. We discuss several enhancements to our basic procedure and present computational results.

Next, we develop sequential sampling procedures for assessing solution quality that control the sampling error of the confidence interval on the optimality gap. We present two methods: a fully sequential method, which increases the sample size one observation at a time, and an accelerated method, which increases the sample size in jumps. We prove asymptotic validity of the resulting confidence intervals and present computational results.
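As a toy illustration of the sampling-based optimality-gap idea described above, the sketch below forms a one-sided confidence bound on the gap of a candidate solution in a simple newsvendor model. The newsvendor cost function, the grid-search "solver," and all parameter values are illustrative assumptions for this sketch, not the dissertation's actual procedures or test problems.

```python
# Illustrative sketch: a sampling-based one-sided confidence bound on the
# optimality gap of a candidate solution. The newsvendor model and the
# grid "solver" are assumptions made for this example only.
import math
import random
import statistics

def cost(x, d, c=1.0, p=2.5):
    """Newsvendor cost: order x units at unit cost c, sell min(x, d) at price p."""
    return c * x - p * min(x, d)

def gap_confidence_bound(x_hat, n=2000, seed=1):
    rng = random.Random(seed)
    demands = [rng.uniform(0, 100) for _ in range(n)]  # sampled demand scenarios

    # Solve the sampled approximation over a coarse grid (a stand-in for
    # an optimization solver) to obtain a sampled optimal solution.
    grid = range(0, 101)
    x_star = min(grid, key=lambda x: sum(cost(x, d) for d in demands) / n)

    # Per-scenario gap observations: candidate cost minus sampled-optimal
    # cost, evaluated on the same scenarios (common random numbers).
    gaps = [cost(x_hat, d) - cost(x_star, d) for d in demands]
    g_bar = statistics.fmean(gaps)
    s = statistics.stdev(gaps)
    z = 1.645  # approximate 95% one-sided normal critical value
    return max(g_bar, 0.0) + z * s / math.sqrt(n)  # upper bound on the gap

bound = gap_confidence_bound(x_hat=40)
```

A near-optimal candidate yields a bound close to zero, while a poor candidate yields a visibly larger one, which is what makes such bounds usable as a quality assessment.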
Finally, building on our results on assessing solution quality, we propose a sequential sampling procedure to solve stochastic programs. In this procedure, the sample size is sequentially increased until a stopping criterion is satisfied. The stopping rule depends on the optimality gap estimate of the current candidate solution and its sampling variance. We show that, asymptotically, this procedure finds a solution within a desired quality tolerance with high probability. We present preliminary computational results and discuss implementation issues.
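The sequential idea can be sketched as a loop that alternates between generating a candidate from a sampled problem and checking a stopping rule built from the gap estimate and its sampling error. Everything below is a hedged illustration: the newsvendor model, the grid "solver," the specific stopping rule, and all parameters are assumptions for this sketch, not the dissertation's exact procedure.

```python
# Illustrative sketch of a sequential sampling loop: grow the sample size
# until the gap estimate plus its sampling error falls below a tolerance.
# Model, solver, and stopping parameters are assumptions for this example.
import math
import random
import statistics

def cost(x, d, c=1.0, p=2.5):
    """Newsvendor cost: order x units at unit cost c, sell min(x, d) at price p."""
    return c * x - p * min(x, d)

def sampled_solution(demands):
    """Grid-search stand-in for solving the sampled approximation."""
    n = len(demands)
    return min(range(0, 101), key=lambda x: sum(cost(x, d) for d in demands) / n)

def sequential_solve(epsilon=2.0, n0=500, step=500, max_n=8000, seed=7):
    rng = random.Random(seed)
    n = n0
    while True:
        # Candidate solution from one sample; quality assessed on an
        # independent sample of the same size.
        train = [rng.uniform(0, 100) for _ in range(n)]
        x_hat = sampled_solution(train)
        test = [rng.uniform(0, 100) for _ in range(n)]
        x_star = sampled_solution(test)

        gaps = [cost(x_hat, d) - cost(x_star, d) for d in test]
        g_bar = max(statistics.fmean(gaps), 0.0)  # gap estimate
        s = statistics.stdev(gaps)                # sampling variability
        # Stop once the gap estimate plus sampling error is within tolerance.
        if g_bar + 1.645 * s / math.sqrt(n) <= epsilon or n >= max_n:
            return x_hat, n
        n += step  # otherwise, increase the sample size and repeat

x_hat, n_used = sequential_solve()
```

The stopping rule here combines the point estimate of the gap with a normal-approximation sampling-error term, mirroring the dependence on the gap estimate and its sampling variance described above; the exact form of the rule in the dissertation differs from this simplified version.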