Browsing by Subject "Monte Carlo simulation"
Now showing 1 - 15 of 15
Item A cash flow model of carbon capture retrofit investment in Texas power generation using Monte Carlo simulation (2021-04-28) Murosko, Andrew Makary; Dyer, James S.
The retrofitting of carbon capture systems to fossil-based powerplants is increasingly critical to decarbonization efforts within the power sector. However, private retrofit investment remains limited due to substantial project costs and risks without adequate incentives. Ambiguity in climate policy design, feasibility, and influence adds to the preexisting commercial and technical risks of retrofitting. Consequently, power producers must make retrofit decisions under considerable uncertainty. This research seeks to construct a flexible discounted cash flow (DCF) model to represent plant-level retrofit decision-making within the ERCOT power system of Texas. Through Monte Carlo simulation techniques, the model incorporates both user-determined and stochastic parameters representative of the technical, financial, and policy uncertainties associated with retrofitting coal and natural gas powerplants. The model separately evaluates both the net present value (NPV) of a hypothetical standalone retrofit project and the incremental NPV comparison to the existing, non-retrofitted status quo. Model results indicate minimal commerciality for CCS retrofitting within the Texas and ERCOT markets. Disparities in parameter sensitivity by plant fuel type, driven by differences in retrofit costs and CO₂ intensities, suggest that plant type partially dictates the efficacy of decarbonization policy proposals, including a carbon tax. Sensitivity results show that existing CCS incentives and policy designs overcompensate inefficient coal plants for maximizing electricity generation for the sake of subsequent capture of the CO₂ byproduct. Future policies will likely need to both reduce the upfront cost and offset the additional post-retrofit operational and fuel costs to incentivize retrofitting. Overall, the model design establishes a simple, yet flexible, tool to improve the consensus on carbon capture retrofit sensitivities, policy design, and decision-making considerations.
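As a rough illustration of the Monte Carlo cash flow approach described in the entry above, the sketch below draws stochastic inputs (retrofit cost, CO₂ credit value, capture volumes, added operating cost) and accumulates discounted cash flows into an NPV distribution. All distributions, parameter values, and names are hypothetical placeholders rather than values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def retrofit_npv_samples(n_draws=10_000, years=20, discount_rate=0.08):
    """Monte Carlo NPV of a hypothetical CCS retrofit (all inputs illustrative)."""
    capex = rng.normal(900.0, 150.0, n_draws)              # upfront retrofit cost, $M
    co2_price = rng.normal(50.0, 15.0, (n_draws, years))   # credit value, $/tonne
    captured = rng.uniform(2.0, 3.0, (n_draws, years))     # CO2 captured, Mt/yr
    opex = rng.normal(60.0, 10.0, (n_draws, years))        # added O&M and fuel cost, $M/yr

    cash_flows = co2_price * captured - opex                # $M per year
    discount = (1.0 + discount_rate) ** -np.arange(1, years + 1)
    return cash_flows @ discount - capex                    # vector of NPV draws, $M

npv = retrofit_npv_samples()
print(f"mean NPV: {npv.mean():.0f} $M, P(NPV > 0): {(npv > 0).mean():.2f}")
```

An incremental comparison against the non-retrofitted status quo would subtract a second simulated cash flow stream before discounting.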
Item An assessment of the system costs and operational benefits of vehicle-to-grid schemes (2013-12) Harris, Chioke Bem; Webber, Michael E., 1971-
With the emerging nationwide availability of plug-in electric vehicles (PEVs) at prices attainable for many consumers, electric utilities, system operators, and researchers have been investigating the impact of this new source of electricity demand. The presence of PEVs on the electric grid might offer benefits equivalent to dedicated utility-scale energy storage systems by leveraging vehicles' grid-connected energy storage through vehicle-to-grid (V2G) enabled infrastructure. Existing research, however, has not effectively examined the interactions between PEVs and the electric grid in a V2G system. To address these shortcomings in the literature, longitudinal vehicle travel data are first used to identify patterns in vehicle use. This analysis showed that vehicle use patterns are distinctly different between weekends and weekdays, that seasonal interactions between vehicle charging, electric load, and wind generation might be important, and that vehicle charging might increase already high peak summer electric load in Texas. Subsequent simulations of PEV charging were performed, which revealed that unscheduled charging would increase summer peak load in Texas by approximately 1%, and that uncertainty arising from unscheduled charging would require only limited increases in frequency regulation procurements. To assess the market potential for the implementation of a V2G system that provides frequency regulation ancillary services, and might be able to provide financial incentives to participating PEV owners, a two-stage stochastic programming formulation of a V2G system operator was created. In addition to assessing the market potential for a V2G system, the model was also designed to determine the effect of the market power of the V2G system operator on prices for frequency regulation, the effect of uncertainty in real-time vehicle availability and state-of-charge on the operator's ability to provide regulation services, and the effect of different vehicle characteristics on revenues. Results from this model showed that the V2G system operator could generate revenue from participation in the frequency regulation market in Texas, even when subject to the uncertainty in real-time vehicle use. The model also showed that the V2G system operator would have a significant impact on prices: as the number of PEVs participating in a V2G program in a given region increased, per-vehicle revenues, and thus the compensation provided to vehicle owners, would decline dramatically. From these estimated payments to PEV owners, the decision to participate in a V2G program was analyzed. The balance between the estimated payments to PEV owners for participating in a V2G program and the increased probability of being left with a depleted battery as a result of V2G operations indicates that an owner of a range-limited battery electric vehicle (BEV) would probably not be a viable candidate for joining a V2G program, while a plug-in hybrid electric vehicle (PHEV) owner might find a V2G program worthwhile. Even for a PHEV owner, however, compensation for participating in a V2G program will provide limited incentive to join.
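A much-simplified sketch of the revenue question posed in the V2G entry above: Monte Carlo sampling of per-vehicle frequency regulation revenue under uncertain plug-in hours, market clearing, and prices. The thesis uses a two-stage stochastic program with price feedback; the availability model, prices, and power rating here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def v2g_annual_revenue(n_draws=5_000, power_kw=6.6):
    """Hypothetical per-vehicle annual regulation revenue (illustrative only)."""
    plugged_hours = rng.triangular(6, 10, 14, (n_draws, 365))       # h/day enrolled
    reg_price = rng.lognormal(np.log(10.0), 0.4, (n_draws, 365))    # $/MW per hour offered
    cleared = rng.uniform(0.5, 1.0, (n_draws, 365))                 # fraction of capacity cleared

    daily = (power_kw / 1000.0) * plugged_hours * cleared * reg_price  # $ per day
    return daily.sum(axis=1)

rev = v2g_annual_revenue()
print(f"median annual revenue: ${np.median(rev):.0f} "
      f"(5th-95th pct: ${np.percentile(rev, 5):.0f}-${np.percentile(rev, 95):.0f})")
```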
Item Decomposition and variance reduction techniques for stochastic mixed integer programs (2018-08) Zolan, Alexander Joseph; Hasenbein, John J.; Morton, David P.; Bard, Jonathan F.; Hanasusanto, Grani A.; Newman, Alexandra M.
Obtaining upper and lower bounds on the optimal value of a stochastic integer program can require solution of multiple-scenario problems, which are computationally expensive or intractable using off-the-shelf integer-programming software. Additionally, optimal solutions to a two-stage problem whose second stage spans long time horizons may be optimistic, due to the model's inappropriate ability to plan for future periods which are not known in practice. To that end, we present a framework for optimizing system design in the face of a restricted class of policies governing system operation, which aim to model realistic operation. This leads to a natural decomposition of the problem yielding upper and lower bounds which we can compute quickly. We illustrate these ideas using a model that seeks to design and operate a microgrid to support a forward operating base. Here, designing the microgrid includes specifying the number and type of diesel generators, PV systems, and batteries, while operating the grid involves dispatching these assets to satisfy load at minimum cost. We extend our approach to solve the same problem under load and photovoltaic uncertainty, and propose a method to generate appropriately correlated scenarios by simulating building occupancy via a bottom-up approach, then using the occupancy levels to inform environmental control unit loads on the base. Finally, in a separate line of work, we optimize the design of the strata for a stratified sampling estimator to reduce variance. We extend this method to the multivariate setting by optimizing the strata for a nonuniform Latin hypercube estimator. We then present empirical results showing that our method reduces the variance of the estimator compared to one using equal-probability strata.

Item Depth resolved diffuse reflectance spectroscopy (2015-05) Hennessy, Richard J.; Markey, Mia Kathleen; Tunnell, James W.
This dissertation focuses on the development of computational models and algorithms related to diffuse reflectance spectroscopy. Specifically, this work aims to advance diffuse reflectance spectroscopy into a technique that is capable of measuring depth-dependent properties in tissue. First, we introduce the Monte Carlo lookup table (MCLUT) method for extracting optical properties from diffuse reflectance spectra. Next, we extend this method to a two-layer tissue geometry so that it can extract depth-dependent properties in tissue. We then develop a computational model that relates photon sampling depth to optical properties and probe geometry. This model can be used to aid in the design of application-specific diffuse reflectance probes. In order to provide justification for using a two-layer model for extracting tissue properties, we show that the use of a one-layer model can lead to significant errors in the extracted optical properties. Lastly, we use our two-layer MCLUT model and a probe that was designed based on our sampling depth model to extract tissue properties from the skin of 80 subjects at 5 anatomical locations. The results agree with previously published values for skin properties and show that diffuse reflectance spectroscopy can be used to measure depth-dependent properties in tissue.
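A bare-bones illustration of the lookup-table idea behind the MCLUT method in the diffuse reflectance entry above: precompute reflectance over a grid of optical properties, then invert a measurement by searching the table. A simple analytic expression stands in here for the Monte Carlo-generated table values, and the grid, units, and functional form are hypothetical.

```python
import numpy as np

def forward_reflectance(mu_a, mu_s_prime):
    """Placeholder forward model; in MCLUT this table comes from Monte Carlo runs."""
    return mu_s_prime / (mu_a + mu_s_prime) * np.exp(-2.0 * mu_a)

# Precompute the lookup table over absorption / reduced scattering grids (1/mm)
mu_a_grid = np.linspace(0.01, 1.0, 200)
mu_s_grid = np.linspace(0.5, 5.0, 200)
MU_A, MU_S = np.meshgrid(mu_a_grid, mu_s_grid, indexing="ij")
lut = forward_reflectance(MU_A, MU_S)

def invert(measured_reflectance, mu_s_prime_known):
    """Recover mu_a from one reflectance value, assuming mu_s' is known (toy inversion)."""
    j = np.argmin(np.abs(mu_s_grid - mu_s_prime_known))
    i = np.argmin(np.abs(lut[:, j] - measured_reflectance))
    return mu_a_grid[i]

truth = forward_reflectance(0.3, 2.0)
print("recovered mu_a:", invert(truth, 2.0))  # should be close to 0.3
```

In the actual method the table is built from Monte Carlo simulations for a specific probe geometry, and the inversion is a least-squares fit over the full measured spectrum rather than a single-point lookup.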
Item Economic valuation : where is the value of the Pirsaat Project from? (2007-12) Guo, Qintao; Jablonowski, Christopher J.
The value of an E&P project comes from the cash flows it produces. These cash flows are subject to the uncertainty of input parameters and are also affected by contingent decisions that change the course of the project. Three project valuation methods, the discounted cash flow (DCF) method, Monte Carlo simulation, and the real option valuation (ROV) method, are utilized to evaluate a specific E&P project in the Pirsaat oil field in Azerbaijan. The DCF method and Monte Carlo simulation both follow predetermined paths, thus ignoring the value of managerial flexibility, also called options. As an extension of DCF, ROV can highlight the option values inherent in the project. Therefore, ROV provides more insight about the project value. However, there is no widely accepted ROV approach today. The integrated approach is adopted in this thesis, as it treats all sources of uncertainty by separating them into market uncertainty and technical uncertainty, and is thus more robust.

Item Enhancing generalizability and feasibility in sample selection : a methodological study of cluster analysis for stratifying populations (2023-08) Furman, Gleb; Pustejovsky, James E.; Whittaker, Tiffany A.; Beretvas, Tasha; Tipton, Elizabeth; Roberts, Gregory J.
This dissertation focuses on the critical need for research findings that are applicable and generalizable to diverse populations in the context of policy-making and funding allocation. Biases favoring majority groups often emerge due to overlooked variations within subgroups and inadequate sampling strategies. The objective of this study is to help address this issue by providing accessible and effective methods for selecting representative samples, with the ultimate goal of promoting the inclusion of diverse populations and ensuring unbiased estimation of main effects. In the realm of educational intervention research, randomized control trials (RCTs) have played a pivotal role in demonstrating efficacy. However, reliance on convenience sampling restricts the generalizability of findings beyond the study sample. Recent research has highlighted the lack of representative sampling in federally funded efficacy studies, necessitating the development of design-based approaches to enhance generalizability. The present study focuses specifically on stratified sampling using cluster analysis as a promising method for achieving representative samples. In this context, cluster analysis serves as a dimension reduction technique, enabling the population to be stratified based on covariates associated with treatment effect heterogeneity. The selected stratified samples facilitate population-level inference, addressing the limitations of convenience sampling. The primary aim of this study is to investigate the influence of various decisions in the cluster analysis process on the generalizability and feasibility of stratified sampling. By utilizing Monte Carlo simulation and real-world data, the findings shed light on the optimal number of high-quality strata that enhance generalizability without imposing significant recruitment challenges. These findings offer valuable guidance to researchers in effectively allocating resources and devising sampling strategies that maximize the impact of their study designs. Additionally, this study introduces a novel simulation design framework that can be extended for future methodological research. The framework offers flexibility in designing and testing recruitment strategies and accommodates various algorithms for modeling participation bias. By developing rigorous research designs that promote the inclusion of diverse populations, this study informs effective policy-making and funding allocation, ensuring that research findings are applicable to a broad range of demographic groups.
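A minimal sketch of the stratify-by-clustering idea in the entry above: cluster population units on covariates linked to treatment effect heterogeneity, then recruit proportionally from each resulting stratum. The tiny k-means routine, covariates, and sample sizes below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def kmeans(X, k, n_iter=50):
    """Tiny k-means (stand-in for any clustering routine) returning stratum labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def stratified_sample(X, k=4, n_total=60):
    """Proportional allocation of recruits across cluster-defined strata."""
    labels = kmeans(X, k)
    chosen = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        n_j = max(1, round(n_total * len(idx) / len(X)))   # proportional allocation
        chosen.extend(rng.choice(idx, min(n_j, len(idx)), replace=False))
    return np.array(chosen)

# Hypothetical population: 2,000 schools described by two standardized covariates
population = rng.normal(size=(2000, 2))
sample_idx = stratified_sample(population)
print(len(sample_idx), "units recruited across strata")
```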
Item Evaluation of data imputation techniques in pavement texture processing (2021-01-26) Sabillon-Orellana, Christian Andres; Prozzi, Jorge Alberto
The importance of roads in modern society is undeniable, and nowadays federal, state, and local highway agencies are increasing the demands on the performance and serviceability of transportation infrastructure. It is no longer sufficient to have a pavement with enough structural capacity to sustain the demands of traffic. There are also growing demands to improve the functional properties of the road that are highly correlated with texture, such as skid resistance, proper drainage, and smoothness. To better assess, compare, and improve the functional properties of roads, there has been an effort to standardize the measurement of texture at highway speeds based on surface profiles. But even standardized measurement methods are not enough to ultimately improve road functional properties if the processing of these profile data changes depending on who the analyst is. Therefore, meticulous studies need to be performed to determine the best practices for processing pavement texture data. This thesis studied the process of data imputation to determine the best imputation method based on accuracy and computation time. The case study explored ten popular imputation methods, explained how they work, tested each of them by means of Monte Carlo (MC) simulation, and ranked their efficiency using the Analytical Hierarchical Process (AHP). A two-tailed hypothesis test was used to make the final decision and determine whether the gain in imputation accuracy (if any) was statistically significant compared to the same statistic computed with missing data. Data imputation for texture data processing was shown to significantly increase the accuracy of estimates of texture summary statistics when a good imputation method was implemented. This study found that linear interpolation was the best imputation technique, not only because of its robustness and efficiency but also because of its simplicity and ease of implementation. However, it was also shown that using poor imputation techniques, such as spline interpolation for gaps of missing data greater than ten data points, can yield biased estimates of pavement texture statistics that are significantly worse than simply computing those statistics using the data with missing entries.

Item Experimental investigation of the beta distribution in Monte Carlo simulation (1981) Ganeriwala, Manju Surendra; Sullivan, Robert S.
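To illustrate the kind of comparison run in the pavement texture imputation entry above, the sketch below removes a synthetic run of readings from a profile, fills it by linear interpolation, and compares a summary statistic with and without imputation. The profile, gap length, and statistic are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

# Hypothetical texture profile: elevation (mm) at 2,000 evenly spaced points
x = np.arange(2000)
profile = 0.3 * np.sin(x / 15.0) + 0.05 * rng.standard_normal(x.size)

# Knock out a run of readings to mimic sensor dropout
damaged = profile.copy()
damaged[800:815] = np.nan

# Linear interpolation imputation across the gap
missing = np.isnan(damaged)
imputed = damaged.copy()
imputed[missing] = np.interp(x[missing], x[~missing], damaged[~missing])

def texture_stat(p):
    """Toy stand-in for a texture summary statistic (mean absolute deviation)."""
    return np.mean(np.abs(p - p.mean()))

print("full profile  :", texture_stat(profile))
print("after imputing:", texture_stat(imputed))
print("gaps dropped  :", texture_stat(damaged[~missing]))  # ignoring the missing points
```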
Item Highway case study investigation and sensitivity testing using the Project Evaluation Toolkit (2011-08) Fagnant, Daniel James; Kockelman, Kara; Xie, Chi
As transportation funding becomes increasingly constrained, it is imperative that decision makers invest precious resources wisely and effectively. Transportation planners need effective tools for anticipating outcomes (or ranges of outcomes) in order to select preferred project alternatives and evaluate funding options for competing projects. To this end, this thesis describes multiple applications of a new Project Evaluation Toolkit (PET) for highway project assessment. The PET itself was developed over a two-year period by the thesis author, in conjunction with Dr. Kara Kockelman and Dr. Chi Xie, with support from others, as described in Kockelman et al. (2010) and the PET Users Guidebook (Fagnant et al. 2011). Using just link-level traffic counts (and other parameter values, if users wish to change defaults), PET quickly estimates how transportation network changes impact traveler welfare (consisting of travel times and operating costs), travel time reliability, crashes, and emissions. Summary measures (such as net present values and benefit-cost ratios) are developed over multi-year, long-term horizons to quantify the relative merit of project scenarios. This thesis emphasizes three key topics: a background and description of PET, case study evaluations using PET, and sensitivity analysis (under uncertain inputs) using PET. The first section includes a discussion of PET's purpose, operation, and theoretical behavior, much of which is taken from Fagnant et al. (2010). The second section offers case studies on capacity expansion, road pricing, demand management, shoulder lane use, speed harmonization, incident management, and work zone timing along key links in the Austin, Texas network. The final section conducts extensive sensitivity testing of results for two competing upgrade scenarios (one tolled, the other not); the work examines how input variations impact PET outputs over hundreds of model applications. Taken together, these investigations highlight PET's capabilities while identifying potential shortcomings. Such findings allow transportation planners to better appreciate the impacts that various projects can have on the traveling public, how project evaluation may best be tackled, and how they may use PET to anticipate the impacts of projects they may be considering, before embarking on more detailed analyses and finalizing investment decisions.
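A compact sketch of the style of sensitivity testing described in the PET entry above: sample uncertain inputs, recompute a benefit-cost ratio many times, and inspect the spread. The inputs, ranges, and benefit categories are hypothetical and not taken from PET itself.

```python
import numpy as np

rng = np.random.default_rng(seed=5)

def bc_ratio_samples(n_draws=10_000, horizon_years=20, discount_rate=0.05):
    """Monte Carlo benefit-cost ratios for a hypothetical highway upgrade."""
    capital_cost = rng.triangular(80, 100, 140, n_draws)        # $M
    time_savings = rng.normal(9.0, 2.5, n_draws)                # $M/yr, traveler welfare
    crash_savings = rng.normal(1.5, 0.6, n_draws)               # $M/yr
    emissions_savings = rng.uniform(0.2, 0.8, n_draws)          # $M/yr

    annual_benefit = time_savings + crash_savings + emissions_savings
    annuity = (1 - (1 + discount_rate) ** -horizon_years) / discount_rate
    return annual_benefit * annuity / capital_cost

bcr = bc_ratio_samples()
print(f"P(BCR > 1): {(bcr > 1).mean():.2f}, "
      f"5th/50th/95th percentiles: {np.percentile(bcr, [5, 50, 95]).round(2)}")
```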
Item Misspecification of longitudinal measurement invariance within the latent transition analysis framework (2021-01-25) Talley, Anna Elizabeth; Whittaker, Tiffany A.; Beretvas, Tasha; Kang, Hyeon-Ah; Powers, Daniel
Assessing the impact of violations of longitudinal measurement invariance (LMI) within a mixture modeling context is not well-covered territory in current methodological research, and is notably unexplored in latent transition analyses (LTA). At a minimum, it can be assumed that any substantial departure from LMI within the LTA framework would thwart unambiguous interpretations of the latent classes as well as the probabilities of transitioning in and out of each latent class over time. The intent of this dissertation is to initiate the conversation by providing some thoughts and examples of how LMI can manifest in LTA models, followed by a statistical assessment of the most straightforward violation of LMI in LTA: configural non-invariance, or unequal numbers of latent classes emerging at each time point in the population. Monte Carlo simulation methods were used to generate data exhibiting varying degrees of departure from configural LMI; class enumeration decisions and parameter recovery were then explored under LTA models that assume configural invariance. The conditions manipulated in this simulation include the pattern of non-invariance (i.e., classes merging or splitting over time), class homogeneity and separation, class prevalence splits in the non-invariant class, overall sample size, and the transition matrix design (i.e., ordered or unordered movement). By imposing a configurally invariant LTA model on data that are non-invariant in nature, the researcher risks a complete misestimation of the number and type of latent classes that exist at a particular time point, particularly in terms of both under- and overestimated values of within-class agreement. For this reason, it is recommended that researchers make class enumeration decisions at each measurement occasion, based on time-specific latent class analyses (LCA), before fitting the overall LTA model to the data. Any non-invariance discovered at the LCA level can be substantively explored and modeled with a non-symmetrical LTA. However, if the best-fitting class solution must be determined at the LTA level, results from this study suggest that the AIC and ABIC indices are preferable for their overfitting tendencies. It seems reasonable to prefer an overfitted lens for analyzing non-invariant data, due to the added flexibility of the additional parameters estimated, but the parsimony of an underfitted model may be preferable in certain situations. As is typical, larger sample sizes (in this study, N = 1,000) are protective against parameter bias and convergence issues.
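The simulation-study logic in the entry above can be caricatured as follows: generate data from a known number of latent classes, fit candidate models with different class counts, and record which count the AIC and BIC select. A Gaussian mixture is used here as a stand-in for the categorical latent class models in the dissertation, and the sample size, class separation, and replication count are arbitrary.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(seed=6)

def simulate_two_class_data(n=500, separation=2.0, n_items=4):
    """Draw continuous indicators from two latent classes (a stand-in for LCA data)."""
    cls = rng.random(n) < 0.5
    means = np.where(cls[:, None], separation, 0.0)
    return means + rng.standard_normal((n, n_items))

def enumerate_classes(X, max_k=4):
    """Return the class count chosen by AIC and by BIC."""
    aic, bic = [], []
    for k in range(1, max_k + 1):
        gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        aic.append(gm.aic(X))
        bic.append(gm.bic(X))
    return np.argmin(aic) + 1, np.argmin(bic) + 1

picks = np.array([enumerate_classes(simulate_two_class_data()) for _ in range(20)])
print("AIC chose k=2 in", (picks[:, 0] == 2).mean(), "of replications")
print("BIC chose k=2 in", (picks[:, 1] == 2).mean(), "of replications")
```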
Item Monte Carlo simulation of the Jovian plasma torus interaction with Io's atmosphere and the resultant aurora during eclipse (2011-08) Moore, Christopher Hudson; Goldstein, David Benjamin, doctor of aeronautics; Varghese, Philip L.; Raman, Venkatramanan; Trafton, Laurence M.; Combi, Michael R.
Io, the innermost Galilean satellite of Jupiter, exhibits a wide variety of complex phenomena such as interaction with Jupiter's magnetosphere, volcanic activity, and a rarefied multi-species sublimating and condensing atmosphere with an ionosphere. Io's orbital resonance with Jupiter and the other Galilean satellites produces intense tidal heating. This makes Io the most volcanically active body in the solar system, with plumes that rise hundreds of kilometers above the surface. In the present work, the interaction of Io's atmosphere with the Jovian plasma torus is simulated via the Direct Simulation Monte Carlo (DSMC) method, and the aurora produced via electron-neutral excitation collisions is examined using an electron transport Monte Carlo simulation. The electron transport Monte Carlo simulation models the electron collisions with the neutral atmosphere and their transport along field lines as they sweep past Io, using a pre-computed steady atmosphere and magnetic field. As input to the Monte Carlo simulation, the neutral atmosphere was first modeled using prior 2D sunlit continuum simulations of Io's atmosphere produced by others. In order to justify the use of a sunlit atmosphere for eclipse, 1D two-species (SO₂ and a non-condensable) DSMC simulations of Io's atmospheric dynamics during and immediately after eclipse were performed. It was found that the inclusion of a non-condensable species (SO or O₂) leads to the formation of a diffusion layer which prevents rapid collapse. The degree to which the diffusion layer slowed the atmospheric collapse was found to be extremely sensitive to both the initial non-condensable mole fraction and the reaction (or sticking) probability of the non-condensable on the surface. Furthermore, upon egress, vertical stratification of the atmosphere occurred, with the non-condensable species being lifted to higher altitudes by the rapid sublimation of SO₂ as the surface warms. Simulated aurorae (specifically the [OI] 6300 Å emission and the S₂, SO, and SO₂ molecular band emission in the middle ultraviolet) show good agreement with observations of Io in eclipse, and an attempt was made to use the simulations to constrain the upstream torus electron temperature and Io's atmospheric composition, structure, and volcanic activity. It is found that the position of the bright [OI] 6300 Å wake spot relative to Io's equator depends on the position of Io relative to the plasma torus' equator and the resulting asymmetric electron number flux. Using HST/STIS UV-Vis spectra, the upstream electron temperature is weakly constrained to be between 3 eV and 8 eV, depending on the flux of a low-energy (35 eV), non-thermal component of the plasma (more non-thermal flux requires lower thermal plasma temperatures to fit the spectrum). Furthermore, an upper limit of 5% of the thermal torus density (or 180 cm⁻³ based on the Galileo J0 plasma density at Io) is obtained for the low-energy non-thermal component of the plasma. These limits are consistent with Galileo observations of the upstream torus temperature and estimates for the non-thermal component. Finally, plume activity and S₂ content during eclipse observations with HST/STIS were constrained by examining the emission intensity along the spatial axis of the aperture. During the August 1999 UV-Vis observations, the auroral simulations indicate that the large volcanoes Pele and Surt were inactive, whereas Tvashtar was active and Dazhbog and possibly Loki were also actively venting gas. The S₂ content inferred for the large Pele-type plumes was between 5% (Tvashtar) and 30% (Loki, if active), consistent with prior observations (Spencer et al., 2000; Jessup et al., 2007). A 3D DSMC simulation of Io's sublimation and sputtered atmosphere including photo- and plasma-chemistry was also developed. In future work these atmospheric simulations will replace the continuum target atmosphere in the auroral model and thus enable a better match to the observed high-altitude auroral emission. In the present work, the plasma interaction is modeled by a flux of ions and electrons which flow around and through Io's atmosphere along pre-computed fields and interact with the neutral gas. A 3D DSMC simulation of Io's atmosphere, assuming a simple thermal model for the surface just prior to ingress into eclipse and uniform frost coverage, has been performed in order to understand how Io's general atmospheric dynamics are affected by the new plasma model with chemistry and sputtering. Sputtering was found to supply most of the nightside atmosphere (producing an SO₂ column of ~5×10¹³ cm⁻²); however, the dense dayside sublimation atmosphere was found to block sputtering of the surface. The influence of the dynamic plasma pressure on the day-to-night circumplanetary flow was found to be quite substantial, causing the day-to-night wind across the dawn terminator to flow slightly towards the equator. This results in a region of high density near the equator that extends far (~2000 km for the condensable species) onto the nightside across the dawn terminator. Thus, even without thermal lag due to rotation or variable surface frost, highly asymmetric equatorial column densities relative to the subsolar point are obtained. The non-condensable O₂, which is a trace species on the dayside, is the dominant species on the nightside despite increased SO₂ sputtering, because the loss rate of O₂ is slow. Finally, a very intriguing O₂ flow feature was observed near the dusk terminator, where the flow from the leading hemisphere (pushed by the plasma) meets the flow from the dayside trailing hemisphere.
Since the O₂ does not condense on the surface, it slowly convects towards the poles and then back onto the nightside, eventually to be dissociated or stripped away by the plasma.
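A heavily simplified toy related to the electron transport Monte Carlo in the Io entry above: electrons march down through a column of neutral gas, a collision probability per step follows from the local density and a cross section, and excitation events are tallied versus altitude. The density profile and cross section are invented, and energy loss, field-aligned geometry, and multiple species are ignored, so this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

SIGMA = 1.0e-20          # excitation cross section, m^2 (illustrative)
N0 = 1.0e15              # surface neutral number density, m^-3 (illustrative)
SCALE_HEIGHT = 100e3     # m

def density(z):
    return N0 * np.exp(-z / SCALE_HEIGHT)

def run_electron(z_top=600e3, dz=1e3):
    """March one electron downward, sampling excitation collisions cell by cell.
    Energy degradation is ignored, so the electron keeps exciting until it exits."""
    hits = []
    z = z_top
    while z > 0:
        p_collide = 1.0 - np.exp(-density(z) * SIGMA * dz)
        if rng.random() < p_collide:
            hits.append(z)
        z -= dz
    return hits

altitudes = np.concatenate([run_electron() for _ in range(2000)])
counts, edges = np.histogram(altitudes, bins=30, range=(0, 600e3))
print(f"peak emission altitude in this toy model: ~{edges[np.argmax(counts)] / 1e3:.0f} km")
```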
Item Monte Carlo studies of polymer chain solubility in water (2005-12) Lu, Ying, 1972-; Sanchez, Isaac C., 1941-
Poly(ethylene oxide) (PEO, with general formula (CH₂-CH₂-O)[subscript pi]) is completely soluble in water at room temperature over an extremely wide molecular weight range and has been widely studied by experiment and theory. The objective of our work is to study this solubility behavior by the method of Monte Carlo simulation. The insertion factor lnB, which is equivalent to the infinite-dilution Henry's law constant, is used to represent the solubility of various molecules in water. Our research started with simple fluids and aqueous solutions of small molecules, including hard spheres, inert gases, hydrocarbons, and dimethyl ether (DME, as a precursor for PEO). Solubility consists of a favorable energy term and an unfavorable entropy term. Contrary to the common belief that entropy dominates the hydrophobic effect, it is actually the ability of the solute to interact with the solvent (the energetic factor) that dominates solubility. The solubility minimum appearing for both hydrophobic and hydrophilic solutes along the water coexistence curve is the result of competition between the favorable energy contribution and the unfavorable entropy contribution. Normal alkanes with carbon numbers from 1 to 20 have been modeled by Lennard-Jones (LJ) chains to study the solubility of non-polar polymer chains in water. Various constraints have been put on the LJ model to evaluate their effect on solubility. No significant difference was observed for LJ chains with or without fixed bond angles, but torsional interaction changed the chain solubility dramatically. The temperature and chain-length effects on chain solubility have been examined and can be explained by the balance between the intra-chain interaction and the entropy penalty. By choosing the right torsional interaction parameters we may be able to reproduce by simulation the solubility minimum of normal alkanes at C₁₁. PEO was modeled by united-atom chains with lengths up to 30. The most probable distance between two nearest ether oxygens in both vacuum and aqueous solutions matches the hydrogen bond length in bulk water. Hydrogen bonding plays an important role in the unique water solubility behavior of PEO, since the water-PEO interaction effectively increases the total number of hydrogen bonds and results in a favorable change in energy. A trans-gauche-trans conformation along the O-C-C-O bonds does enable hydrogen bond formation between one water molecule and two nearest or next-nearest ether oxygens. A helix structure is not required for PEO to have favorable interactions with water. Two polymers with structures similar to PEO but insoluble in water, poly(methylene oxide) (PMO) and poly(propylene oxide) (PPO), have been studied for comparison with PEO. Their differences in structure from PEO, though slight, reduce the chance of hydrogen bonds forming between water and the chains and so decrease the solubility.

Item Production through simulation : using simulation technologies to create and evaluate nuclear fuel fabrication facility designs (2021-01-26) Bourque, Cade Michael; Clarno, Kevin T.; Leibowicz, Benjamin D.
This project explores how physics-based and discrete-event simulation technologies can be jointly employed to model a suite of nuclear fuel fabrication facility designs with variable radiological environments, optimize their design to lower operational costs subject to a set of design constraints, and evaluate their performance. Initially, three variants of U-20Pu-10Zr metallic nuclear fuel in three specific geometric configurations are modeled and studied across more than 50 physics simulations in the nuclear physics software packages SCALE and Monte Carlo N-Particle Transport (MCNP6.2). From these tests, values for effective dose rate and criticality are recorded for each of the nine alloy-geometry configurations. These values are then incorporated into discrete-event models of fuel fabrication in the simulation program ExtendSim Pro 10 as process attributes. This makes it possible to track the dose to facility personnel as fuel is fabricated, enabling both an evaluation of the safety of a given facility design in terms of absorbed dose per worker per year and the crafting of operational guidelines that adhere to federal and local safety regulations.

Item Reliability modeling for capital project decisions (2010-08) Poulassichidis, Antonios; Ambler, Tony; McCann, R. Bruce
Exploration and Production (E&P) project costs within the oil industry are continuously increasing, reflecting a reality of harsher environments, remote locations with minimal existing infrastructure, and cost increases for materials and skilled resources. The significant capital expenditures translate into a number of projects for either new or revamped production facilities. Successful project completion requires a series of correct decisions throughout the project life-cycle, namely design, construction, operations, maintenance, and decommissioning. Using a Reliability, Availability and Maintainability (RAM) model as part of the project decision process is an E&P industry best practice that has recently gained acceptance in Hess Corporation. This paper presents the RAM methodology and the gains from its application in a capital project.
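A small sketch in the spirit of the RAM entry above: Monte Carlo simulation of a production train's availability from component failure and repair times. The component list, mean times to failure, and repair durations are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=8)

# Hypothetical series system: (name, mean time to failure [h], mean time to repair [h])
COMPONENTS = [("compressor", 8000.0, 72.0),
              ("separator", 20000.0, 24.0),
              ("export pump", 12000.0, 48.0)]

def simulate_availability(horizon_h=8760.0, n_runs=2000):
    """Fraction of the horizon the whole (series) train is up, averaged over runs."""
    uptimes = np.empty(n_runs)
    for r in range(n_runs):
        downtime = 0.0
        for _, mttf, mttr in COMPONENTS:
            t = rng.exponential(mttf)
            while t < horizon_h:
                repair = rng.exponential(mttr)
                downtime += min(repair, horizon_h - t)   # overlapping outages ignored
                t += repair + rng.exponential(mttf)
        uptimes[r] = max(0.0, horizon_h - downtime) / horizon_h
    return uptimes

availability = simulate_availability()
print(f"mean availability: {availability.mean():.4f}")
```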
Item Uncertainty propagation and conjunction assessment for resident space objects (2015-12) Vittaldev, Vivek; Russell, Ryan Paul, 1976-; Erwin, Richard S.; Akella, Maruthi R.; Bettadpur, Srinivas V.; Humphreys, Todd E.
Presently, the catalog of Resident Space Objects (RSOs) in Earth orbit tracked by the U.S. Space Surveillance Network (SSN) contains more than 21,000 objects. The size of the catalog continues to grow due to an increasing number of launches, improved tracking capabilities, and, in some cases, collisions. Simply propagating the states of these RSOs is a computational burden, while additionally propagating the uncertainty distributions of the RSOs and computing collision probabilities increases the computational burden by at least an order of magnitude. Tools are developed that propagate the uncertainty of RSOs with Gaussian initial uncertainty from epoch until a close approach. The number of possible elements in a Gaussian Mixture Model (GMM), in the form of a precomputed library, has been increased, and the strategy for multivariate problems has been formalized. The accuracy of a GMM is increased by propagating each element by a Polynomial Chaos Expansion (PCE). Both techniques reduce the number of function evaluations required for uncertainty propagation and result in a sliding scale where accuracy can be improved at the cost of increased computation time. A parallel implementation of the accurate benchmark Monte Carlo (MC) technique has been developed on the Graphics Processing Unit (GPU) that is capable of using samples from any uncertainty propagation technique to compute the collision probability. The GPU MC tool delivers up to two orders of magnitude speedup compared to a serial CPU implementation. Finally, a CPU implementation of the collision probability computation using Cartesian coordinates requires orders of magnitude fewer function evaluations compared to an MC run. Fast computation of the inherently nonlinear growth of the uncertainty distribution in orbital mechanics, together with accurate computation of the collision probability, is essential for maintaining a future space catalog and for preventing uncontrolled growth of the debris population. The uncertainty propagation and collision probability computation methods and algorithms developed here are capable of running on personal workstations and stand to benefit users ranging from national space surveillance agencies to private satellite operators. The developed techniques are also applicable to many general uncertainty quantification and nonlinear estimation problems.
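A bite-sized version of the Monte Carlo collision probability computation described in the entry above: sample the relative position of two objects at closest approach from a Gaussian relative state and count how often the miss distance falls below the combined hard-body radius. The mean miss, covariance, and radius below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=9)

def collision_probability(n_samples=1_000_000,
                          mean_miss=np.array([150.0, 40.0, 10.0]),   # m, relative position
                          cov=np.diag([200.0, 80.0, 30.0]) ** 2,      # m^2, relative covariance
                          hard_body_radius=20.0):                     # m, combined radius
    """Plain Monte Carlo estimate of P(miss distance < combined radius) at close approach."""
    rel = rng.multivariate_normal(mean_miss, cov, size=n_samples)
    hits = np.linalg.norm(rel, axis=1) < hard_body_radius
    p = hits.mean()
    stderr = np.sqrt(p * (1.0 - p) / n_samples)
    return p, stderr

p, se = collision_probability()
print(f"Pc ~ {p:.2e} +/- {se:.1e}")
```

The estimate parallelizes trivially over samples, which is the route to the GPU speedups described in the dissertation.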