# Browsing by Subject "Reliability analysis"

Now showing 1 - 3 of 3


## Inverse source problems for focusing wave energy to targeted subsurface formations: theory and numerical experiments (2016-08)

Karve, Pranav Madhav; Kallivokas, Loukas F.; Manuel, Lance; Ghattas, Omar; Fomel, Sergey; Lake, Larry; Huh, Chun; Stokoe II, Kenneth

Economically competitive and reliable methods for the removal of oil or contaminant particles from the pores of geological formations play a crucial role in petroleum engineering, hydrogeology, and environmental engineering. Post-earthquake observations at depleted oil fields, as well as limited field experiments, suggest that stress wave stimulation of a formation may release particles trapped in its interstices. The stimulation can be applied using wave sources placed on or below the ground surface, and its effectiveness is typically proportional to the magnitude of the wave motion generated in the geological formation of interest. Equipment limitations and various sources of attenuation restrict the magnitude of the wave motion that such sources can induce in the target formation. Thus, the engineering design of wave energy delivery systems able to produce wave motion of the required magnitude in the target zone is key to successful mobilization of trapped interstitial particles. In this work, we discuss an inverse source approach that yields the optimal source time signals and source locations and could be used to design wave energy delivery systems. We cast the underlying forward wave propagation problem in two or three spatial dimensions. We model the target formation as an elastic or poroelastic inclusion embedded within heterogeneous, elastic, semi-infinite hosts. To simulate the semi-infiniteness of the elastic host, we augment the (finite) computational domain with a buffer of perfectly matched layers (PMLs).
We define a metric of the wave motion generated in the target inclusion to quantify the amount of delivered wave energy. The inverse source algorithm is based on a systematic framework of constrained optimization, in which minimization of a suitably defined objective functional is tantamount to maximization of the motion metric of the target formation. We demonstrate, via numerical experiments, that the algorithm converges to the spatial and temporal characteristics of surface loads that maximize energy delivery to the target formation. The numerical-simulation-based methodology assumes perfect knowledge of the material properties and of the overall geometry of the geostructure of interest. In practice, however, precise knowledge of the properties of the geological formations is elusive, and quantifying the reliability of a deterministic approach is crucial for evaluating the technical and economic feasibility of the design. To this end, we also discuss a methodology that could be used to quantify the uncertainty in the wave energy delivery. Specifically, we treat the material properties of the layers as random variables and perform a first-order uncertainty analysis of the elastodynamic system to compute the probabilities of failure to achieve threshold values of the motion metric. We illustrate the uncertainty quantification procedure for a two-dimensional, layered, isotropic, elastic host containing an elastic target inclusion.
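The first-order uncertainty analysis described in this abstract can be illustrated with a generic mean-value first-order second-moment (FOSM) calculation. This is a minimal sketch, not the dissertation's actual implementation: the function name, the finite-difference gradient, and the assumption of independent normal material properties are all illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def fosm_failure_probability(metric, mu, sigma, threshold, h=1e-6):
    """First-order (mean-value) estimate of P[metric(X) < threshold].

    metric    : callable mapping a parameter vector to the scalar motion metric
    mu, sigma : means and standard deviations of the (assumed independent,
                normal) random material properties
    threshold : required value of the motion metric
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    m0 = metric(mu)                        # metric evaluated at the mean point
    grad = np.empty_like(mu)               # finite-difference gradient of the metric
    for i in range(mu.size):
        x = mu.copy()
        x[i] += h
        grad[i] = (metric(x) - m0) / h
    var = np.sum((grad * sigma) ** 2)      # first-order variance propagation
    beta = (m0 - threshold) / sqrt(var)    # reliability index
    # Under the linear/normal approximation, P[failure] = Phi(-beta)
    return 0.5 * (1.0 + erf(-beta / sqrt(2.0)))
```

In practice each `metric` evaluation is an elastodynamic simulation, so the first-order linearization trades accuracy for a handful of solver runs instead of thousands of Monte Carlo samples.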
The inverse source and uncertainty quantification methodologies, in conjunction, can be used to design the characteristics of the wave sources used to deliver wave energy to a targeted subsurface formation.

## Reliability methods in dynamic system analysis (2012-12)

Munoz, Brad Ernest; Longoria, Raul G.; Fahrenthold, Eric P.

Standard techniques for analyzing a system's response under uncertain system parameters or inputs are generally importance sampling methods. Sampling methods require a large number of simulation runs before the system output statistics can be analyzed. As model fidelity increases, sampling techniques become computationally infeasible, and reliability methods have gained popularity as an analysis approach that requires significantly fewer simulation runs. Reliability analysis is an analytic technique that finds a particular point in the design space that can be accurately related to the probability of system failure. However, its application to dynamic systems has remained limited. In this thesis, a First Order Reliability Method (FORM) is used to determine the failure probability of a dynamic system subject to system and input uncertainties. A pendulum-cart system is used as a case study to demonstrate the FORM on a dynamic system. Three failure modes are discussed, corresponding to the maximum pendulum angle, the maximum system velocity, and a combined requirement that neither the maximum pendulum angle nor the maximum system velocity is exceeded. An explicit formulation is generated from the implicit formulation using a Response Surface Methodology, and the FORM is performed using the explicit estimate. Although the analysis converges with minimal simulation computations, attempts to verify the FORM results illuminate current limitations of the methodology.
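The FORM step in the workflow just described can be sketched with the textbook Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration applied to an explicit limit-state function, such as one obtained from a fitted response surface. The function names and the example limit state below are hypothetical, not taken from the thesis.

```python
import numpy as np
from math import erf, sqrt

def form_hlrf(g, grad_g, n, tol=1e-8, max_iter=100):
    """HL-RF search for the most probable failure point of an explicit
    limit state g(u) <= 0, expressed in standard normal space.

    g, grad_g : the limit-state function and its gradient (e.g. from a
                response surface fitted to simulation samples)
    n         : number of standard normal variables
    """
    u = np.zeros(n)                                 # start at the origin (mean point)
    for _ in range(max_iter):
        gv = g(u)
        gr = grad_g(u)
        # HL-RF update: project onto the linearized limit-state surface
        u_new = ((gr @ u - gv) / (gr @ gr)) * gr
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)                        # reliability index
    pf = 0.5 * (1.0 + erf(-beta / sqrt(2.0)))       # Phi(-beta)
    return beta, pf
```

Because the iteration needs only the limit state and its gradient, it converges in a handful of evaluations; this is the efficiency advantage over sampling that the abstract refers to, and also why independent verification of the result typically still requires sampling.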
The results of this initial study conclude that, currently, sampling techniques are necessary to verify the FORM results, which restricts the potential applications of the FORM methodology. Suggested future work focuses on result verification without the use of importance sampling, which would allow reliability methods to have widespread applicability.

## Toward quantifying uncertainties in the performance and design of offshore structures (2021-07-30)

Liu, Jinsong, Ph.D.; Manuel, Lance; Gilbert, Robert B.; Johnson, Blair; Kinnas, Spyridon A.; Walker, Stephen

Accurate modeling and prediction of the long-term extreme response of offshore structures is important in their design. To ensure the operational safety of offshore systems over their service life, it must be verified that extreme load levels do not exceed the design resistance or capacity. A rational approach to this problem is based on probabilistic design, which must account for uncertainties in the system and the environment in a direct manner. The most straightforward way to address the problem is by means of Monte Carlo Simulation (MCS), wherein all the sources of uncertainty are accounted for via random sampling. However, due to the slow convergence rate of MCS and the cost of computing the response for each sample of a dynamic system, this method is not preferred in practice. This is especially true when dealing with low target probabilities of failure. To address these challenges, this study adopts less expensive surrogate models, representative of the original "truth" system, that must be developed by means of a limited number of samples. Once established, the surrogate model can be used with MCS to assess any Quantity of Interest (QoI), such as a T-year return period load (for critical offshore systems, 50-100 year return periods are often of interest).
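The T-year return period load mentioned above can be estimated by brute-force MCS as the (1 - 1/T) quantile of the annual-maximum response distribution. The sketch below is a generic illustration under that convention; the function name and sampler interface are assumptions, and in practice each sample would be an expensive dynamic simulation, which is exactly what motivates the surrogate models.

```python
import numpy as np

def return_period_load(sample_annual_max, T_years, n_samples, rng=None):
    """Crude MCS estimate of the T-year return-period load, taken as the
    (1 - 1/T) quantile of the annual-maximum response distribution.

    sample_annual_max : callable(rng, n) returning n sampled annual maxima
                        (here a cheap stand-in; in practice each sample
                        requires a full dynamic response simulation)
    """
    rng = np.random.default_rng(rng)
    x = sample_annual_max(rng, n_samples)
    return np.quantile(x, 1.0 - 1.0 / T_years)
```

Resolving a 50-year quantile, let alone a low failure probability, requires a very large `n_samples` for a stable estimate, which is the slow-convergence cost the abstract cites.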
For the surrogate models in this work, we apply the widely used Polynomial Chaos Expansion (PCE) method that seeks to represent the QoI original system, M(x), in terms of orthogonal polynomials that depend on a weight function f(x), defined by the probability distribution of all the underlying random variables. There are two key steps involved in building an accurate PCE surrogate model—first, one needs appropriate samples of the true system response for specified inputs and, then, one must estimate the polynomial expansion coefficients of the surrogate model using these sampled values. The accuracy of the PCE surrogate model developed in this manner depends not only on the number of evaluated samples but also on the specific locations of samples in the selected set. For engineering problems that are complex and for which the true response is expensive to evaluate, the number of samples used should be minimized. To this end, we explore advanced sampling approaches to extract as much information as possible regarding the true system by means of a limited number of samples. Specifically, instead of using simple MCS random samples, we will use optimal designs (D- and S-optimal designs, specifically) that offer an ‘optimal’ sample of small size from the domain of the input variables. Using these samples, then, Least-Angle Regression (LAR) is employed to build efficient “sparse” PCE models that improve the stability and accuracy of model prediction relative to ordinary PCE. To address the reliability problem where one is interested in rare events associated with very low probabilities of exceedance (or occurrence), a sequential sampling scheme is used with “exploration” samples that first span the entire domain of the variables; this is then followed by additional “exploitation” samples that seek to improve the accuracy of the surrogate model over-identified local domains of interest. 
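The two PCE-building steps described above, sampling the true response and estimating the expansion coefficients, can be sketched for a single standard normal input using probabilists' Hermite polynomials and ordinary least squares. This is a minimal illustration, not the dissertation's LAR-based sparse scheme, and the function name and interface are assumptions.

```python
import numpy as np

def fit_hermite_pce(model, degree, n_samples, rng=None):
    """Least-squares fit of a 1-D polynomial chaos expansion in probabilists'
    Hermite polynomials He_k(x), orthogonal under the standard normal weight.

    model : callable, the expensive "truth" response M(x) to be surrogated
    Returns coefficients c_k such that M(x) ~ sum_k c_k He_k(x).
    """
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(n_samples)        # samples of the random input
    y = model(x)                              # evaluate the true response
    # Design matrix of Hermite polynomials built via the three-term
    # recurrence He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)
    H = np.empty((n_samples, degree + 1))
    H[:, 0] = 1.0
    if degree >= 1:
        H[:, 1] = x
    for k in range(1, degree):
        H[:, k + 1] = x * H[:, k] - k * H[:, k - 1]
    coeffs, *_ = np.linalg.lstsq(H, y, rcond=None)
    return coeffs
```

Once the coefficients are in hand, evaluating the surrogate is just a polynomial evaluation, so MCS on the surrogate is cheap even for very low exceedance probabilities; the sampling-design and sparse-regression refinements in the abstract aim to get accurate coefficients from as few `model` calls as possible.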
Finally, a novel Christoffel Least Squares (CLS) approach is formulated and used to further reduce the number of samples required for accurate and stable PCE surrogate models. CLS relies on alternative but consistent sampling distributions and associated orthogonal polynomial families that, when used with appropriate weight functions, lead to theoretically more efficient samples and are easily incorporated into the PCE model-building process.