A collection of Bayesian models of stochastic failure processes

dc.contributor.advisor: Damien, Paul, 1960-
dc.contributor.advisor: Press, William H.
dc.creator: Kirschenmann, Thomas Harold
dc.date.accessioned: 2013-11-06T22:08:14Z
dc.date.issued: 2013-05
dc.date.submitted: May 2013
dc.date.updated: 2013-11-06T22:08:14Z
dc.description: text
dc.description.abstract: Risk managers currently seek new advances in statistical methodology to better forecast and quantify uncertainty. This thesis comprises a collection of new Bayesian models and computational methods that collectively aim to better estimate parameters and predict observables when data arise from stochastic failure processes. Such data commonly arise in reliability theory and survival analysis, where the goals are to predict failure times of mechanical devices, compare medical treatments, and ultimately make well-informed risk management decisions. The models proposed in this thesis advance the quality of those forecasts by providing computational modeling methodology for quantitatively oriented decision makers. Through these models, a reliability expert can: model how future decisions affect the process; impose prior beliefs on hazard rate shapes; efficiently estimate parameters with MCMC methods; incorporate exogenous covariate information through Cox proportional hazards models; and use nonparametric priors for enhanced model flexibility. Managers are often forced to make decisions that affect the underlying distribution of a stochastic process, yet they regularly make these choices without a mathematical model of how the process itself depends on their decisions. The first model proposed in this thesis captures this decision dependency, which is then used to construct an optimal future decision policy that exploits the interactions among sequences of decisions. The model and method in this thesis are the first to directly estimate decision dependency in a stochastic process with the flexibility and power of the Bayesian formulation. The model parameters are estimated using an efficient Markov chain Monte Carlo technique, leading to predictive probability densities for the stochastic process.
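As a generic illustration of the MCMC estimation step described above (not the thesis's actual decision-dependent model), a random-walk Metropolis sampler for the two parameters of a plain Weibull failure-time likelihood, with hypothetical vague exponential priors, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated failure times from a Weibull process with shape 1.5, scale 2.0.
data = 2.0 * rng.weibull(1.5, size=200)

def log_post(k, lam, t):
    """Weibull(k, lam) log-likelihood plus vague Exp(0.01) priors."""
    if k <= 0 or lam <= 0:
        return -np.inf
    loglik = np.sum(np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k)
    return loglik - 0.01 * k - 0.01 * lam

# Random-walk Metropolis: propose both parameters jointly, accept with
# probability min(1, posterior ratio).
k, lam = 1.0, 1.0
lp = log_post(k, lam, data)
draws = []
for _ in range(5000):
    k_p, lam_p = k + rng.normal(0, 0.1), lam + rng.normal(0, 0.1)
    lp_p = log_post(k_p, lam_p, data)
    if np.log(rng.uniform()) < lp_p - lp:
        k, lam, lp = k_p, lam_p, lp_p
    draws.append((k, lam))

posterior_mean = np.array(draws[1000:]).mean(axis=0)  # discard burn-in
print(posterior_mean)  # should land near the true (1.5, 2.0)
```

The retained draws approximate the joint posterior; predictive densities for future failure times follow by averaging the Weibull density over these draws.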
Using the posterior distributions of the model's random parameters, a stochastic optimization program is solved to determine the sequence of decisions that minimizes a cost-based objective function over a finite time horizon. The method is tested on artificial data and then used to model maintenance and failure-time data from a condenser system at the South Texas Project Nuclear Operating Company (STPNOC). The second and third models proposed in this thesis offer survival analysts and reliability engineers a new way to incorporate their prior beliefs about the shapes of hazard rate functions. Two generalizations of the Weibull model have recently become popular: the exponentiated Weibull and the modified Weibull densities. Their popularity is largely due to the flexible hazard rate functions they can induce, including bathtub-shaped, increasing, decreasing, and unimodal hazard rates. These models are more complex than the standard Weibull, and their parameters are difficult to estimate with traditional frequentist techniques; a Bayesian approach avoids these difficulties. This thesis develops stylized families of prior distributions that allow engineers to model their beliefs based on the context. Both models are first tested on artificial data and then compared by modeling a low-pressure switch for a containment door at the STPNOC in Bay City, TX. Survival analysis is also performed with these models on a well-known collection of censored data from leukemia treatments. Two additional models use the exponentiated and modified Weibull hazard functions as baseline distributions in Cox proportional hazards models, allowing survival analysts to incorporate additional covariate information. Finally, two nonparametric methods for estimating survival functions are compared using both simulated and real data from cancer treatment research.
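The hazard-shape flexibility mentioned above can be seen numerically. For the exponentiated Weibull, with CDF F(t) = [1 - exp(-(t/σ)^k)]^a, the hazard is increasing, decreasing, bathtub-shaped, or unimodal depending on k and the product a·k. A small sketch (σ fixed at 1 for illustration; the parameter pairs below are arbitrary picks from each shape region, not values from the thesis):

```python
import numpy as np

def ew_hazard(t, a, k, sigma=1.0):
    """Hazard of the exponentiated Weibull, F(t) = (1 - exp(-(t/sigma)^k))^a."""
    z = (t / sigma) ** k
    G = 1.0 - np.exp(-z)                                 # baseline Weibull CDF
    f = a * (k / sigma) * (t / sigma) ** (k - 1) * np.exp(-z) * G ** (a - 1)
    return f / (1.0 - G ** a)

def shape_of(a, k):
    """Classify the hazard shape from where its extrema fall on a grid."""
    t = np.linspace(0.05, 3.0, 2000)
    h = ew_hazard(t, a, k)
    i_min, i_max = int(np.argmin(h)), int(np.argmax(h))
    if 0 < i_min < len(t) - 1:
        return "bathtub"        # interior minimum
    if 0 < i_max < len(t) - 1:
        return "unimodal"       # interior maximum
    return "increasing" if i_max > i_min else "decreasing"

# One parameter pair from each shape region (governed by k and a*k).
for a, k in [(2.0, 2.0), (0.5, 0.5), (0.2, 3.0), (4.0, 0.6)]:
    print(f"a={a}, k={k}: {shape_of(a, k)}")
```

With these choices the four calls should report increasing, decreasing, bathtub, and unimodal hazards, respectively, which is exactly the range of behaviors a single standard Weibull cannot produce.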
The quantile pyramid process is compared to Polya tree priors and is shown to have a distinct advantage: a Polya tree requires choosing a distribution on which to center it. The two approaches achieve effectively the same accuracy when the Polya tree has a very well-informed centering distribution, but that is rarely the case in practice, so the quantile pyramid process is at least as effective as Polya tree priors for modeling unknown situations.
dc.description.department: Computational Science, Engineering, and Mathematics
dc.format.mimetype: application/pdf
dc.identifier.uri: http://hdl.handle.net/2152/21985
dc.language.iso: en_US
dc.subject: Bayesian
dc.subject: Survival analysis
dc.subject: Stochastic processes
dc.subject: Gibbs
dc.subject: Polya
dc.subject: Quantile
dc.title: A collection of Bayesian models of stochastic failure processes
thesis.degree.department: Computational Science, Engineering, and Mathematics
thesis.degree.discipline: Computational and Applied Mathematics
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy

Original bundle: KIRSCHENMANN-DISSERTATION-2013.pdf (2.51 MB, Adobe Portable Document Format)

License bundle: LICENSE_1.txt (1.85 KB, plain text); LICENSE.txt (1.85 KB, plain text)