# Browsing by Subject "Sensitivity analysis"

Now showing 1 - 9 of 9


## Analyzing voltage sag direction using protective relays and deep-learning methods (2023-04-21)
Patha, Lekhaj; Santoso, Surya

As electricity demand continues to grow, power systems are becoming more complex and interconnected, making reliable protection systems more important than ever. Protection systems are designed to detect and isolate faults and other abnormal conditions, preventing them from cascading through the power grid and causing widespread outages. The primary challenge in protection is to detect a fault, its type, and its location. Traditional relays effectively locate, detect, and isolate faults; circuit breakers, fuses, and relays work together to keep the power system stable and reliable under various conditions, including system failures. Smart intelligent relays (SIRs) are designed to perform a broader range of functions, such as fault location and power quality monitoring. Machine learning (ML) techniques are increasingly applied in power system protection to improve the accuracy and speed of fault detection and classification. ML algorithms can analyze real-time data from sensors and other devices to detect and classify faults, including those too small or subtle to be detected by traditional protection systems. This thesis studies methods of identifying the direction of voltage sags in distribution circuits. Voltage sags arise from short-circuit faults involving single, double, and three phase-to-ground conditions. The direction of a fault is defined by the direction of power flow before the event: a fault is classified as downstream of a monitoring location if power flowed towards the fault location before the event, and as upstream if power flowed away from the fault location. The terms upstream and downstream are relative to the monitor location; a downstream fault for one monitor can be an upstream fault for another. This thesis studies applications of protective relaying and deep-learning techniques that use real-time voltage and current waveforms to estimate whether a fault is upstream or downstream of the monitored location(s). The fault data was generated using a time-domain power system modeling tool with variable fault impedances and multiple fault locations. Relay-based approaches were studied, and a deep-learning technique was developed with the generated data. The relay-based techniques identified the fault direction in all cases, irrespective of fault location and duration. ML algorithms can help analyze large amounts of data and detect patterns that may be difficult or impossible for traditional protection systems to identify.

## Application of dynamic optimization methods for foam floods in stratified reservoirs (2018-08-17)
Tang, Brandon Chok-Yie; Nguyen, Quoc P.

Efficient recovery of oil from heavily stratified carbonate reservoirs can be very technically challenging, even when applying waterflood, gasflood, or WAG (water-alternating-gas) processes. To date, relatively few field or pilot applications of foam flooding have been conducted, due to an incomplete understanding of how foam behaves in the field. The reservoir studied in this work is oil-wet and consists of a stratified upper high-permeability zone overlaying a lower low-permeability zone. This study seeks to assess the performance of the foam flooding process in oil recovery and to develop an optimum field injection strategy based upon various objective functions.
In the process, the impact of initial waterflooding and varying foam strength on the optimum project termination time is investigated, along with the sensitivity of the optimum field injection strategy to the foam model parameters. Two main optimization techniques are tested: static optimization, where the injection parameters are set once at the beginning of the simulation, and dynamic optimization, where injection parameters are re-optimized in five-year intervals over the life of the well. The dynamic optimization was performed in two ways: a local dynamic optimization and an early-time weighted optimization. In general, the dynamic optimization outperformed the static optimization with respect to all objective functions. Over the course of the study, a variety of objective functions were used, beginning with maximizing cumulative oil recovery, evolving to maximizing oil recovery while minimizing the gas utilization ratio, and ending with maximizing net present value (NPV). The results ultimately showed that global dynamic optimization of NPV was the most useful way of obtaining a field injection strategy. The optimal process design parameters indicated that high volumes of surfactant, as well as gas in the lower zone, needed to be injected early in the life of the project to best maximize NPV. The optimal termination time study found that the optimal termination time for the project was around ten years; varying the extent of initial waterflooding and altering the foam strength did not affect the suggested termination time. The foam strength sensitivity showed that, among the factors considered (water saturation, oil saturation, and surfactant concentration), the maximum dry-out water saturation had the most profound impact on the NPV.
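The interval-wise ("dynamic") re-optimization described above can be sketched in a few lines. The following is a toy illustration only, with made-up placeholder numbers and a crude recovery proxy standing in for the reservoir simulator and economics used in the study:

```python
# Toy sketch of static vs. interval-wise dynamic optimization of an
# injection rate against an NPV objective. The recovery proxy, prices,
# and rates are invented placeholders, not the study's model.

def interval_cash(rate, remaining, year, price=60.0, cost=50.0, discount=0.10):
    """Toy proxy: oil recovered in one interval scales with injection rate."""
    recovered = remaining * 0.15 * rate
    cash = recovered * price - rate * cost
    return recovered, cash / (1.0 + discount) ** year

def run(dynamic, intervals=6, years=5):
    rates = [r / 10.0 for r in range(1, 11)]   # candidate injection rates
    remaining, npv, fixed = 100.0, 0.0, None
    for i in range(intervals):
        year = i * years
        if dynamic or fixed is None:
            # pick the rate maximizing this interval's discounted cash
            choice = max(rates, key=lambda r: interval_cash(r, remaining, year)[1])
            if not dynamic:
                fixed = choice       # static: commit to the first choice
        rate = choice if dynamic else fixed
        recovered, dnpv = interval_cash(rate, remaining, year)
        remaining -= recovered
        npv += dnpv
    return npv
```

In the study the re-optimization happens inside a full reservoir simulation; the point of the sketch is only the control structure, where the dynamic variant is free to revise the injection parameters at each interval.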
Ultimately, this work develops the framework necessary to create a field injection strategy for foam flooding in the stratified oil-wet reservoir used in this study, a framework that can be extended to other types of reservoirs.

## Applications of Sensitivity Analysis in Petroleum Engineering (2007-05)
Lawal, Azeez Adeyinka; Srinivasan, Sanjay

A crucial question that may be asked during exploratory reservoir analyses and data gathering is: "What is the relative significance of different reservoir parameters?" A parameter is significant if knowledge of its exact value results in an appreciable reduction in the uncertainty of model estimates. Sensitivity Analysis (SA) quantifies and apportions the uncertainty in a model's estimates based on the uncertainty in the model's parameters; thus, SA can be used to determine the relative significance of reservoir parameters. This report starts with the SA of four models using differential analysis, regression, correlation, and variance-decomposition methods. The models are the bi-linear, polynomial, Stock Tank Oil Originally in Place, and Panda and Lake equations; they demonstrate the relative merits of the SA methods. The report culminates in the development of a variance-based Monte Carlo (MC) tank model that is useful in the SA of reservoir production rate, pressure, and recovery forecasts. The model assumes the reservoir is homogeneous and undersaturated, and that it is produced by primary depletion mechanisms. It provides a choice between Latin Hypercube and Sobol sampling; these sampling methods are more efficient than conventional MC (random) sampling. In the cases considered, the sensitivity effects of different reservoir parameters vary over the producing period.
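A sampling-based sensitivity analysis of this kind can be sketched as follows. This is an illustrative stand-in only: the parameter ranges are invented, the oil-in-place model is the standard volumetric expression (with the formation volume factor folded into the constant), and the sensitivity measure here is the squared Pearson correlation rather than a full variance decomposition:

```python
import random

def latin_hypercube(n, dims, rng):
    """Stratified unit-cube sample: each of n strata per dimension is hit once."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def stoiip(area, thickness, porosity, sw):
    # volumetric oil-in-place; units and constant are placeholders
    return 7758.0 * area * thickness * porosity * (1.0 - sw)

def corr2(xs, ys):
    """Squared Pearson correlation, used here as a crude sensitivity score."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy * sxy / (sxx * syy)

rng = random.Random(1)
samples = latin_hypercube(500, 4, rng)
lo = [100.0, 20.0, 0.10, 0.15]          # assumed lower bounds per parameter
hi = [300.0, 80.0, 0.30, 0.45]          # assumed upper bounds per parameter
X = [[l + u * (h - l) for u, l, h in zip(row, lo, hi)] for row in samples]
y = [stoiip(*x) for x in X]

names = ["area", "thickness", "porosity", "sw"]
scores = {nm: corr2([row[i] for row in X], y) for i, nm in enumerate(names)}
```

With these (invented) ranges, the parameter with the largest relative spread dominates the score, which is the kind of relative-significance ranking the report is after.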
Parameters with negligible main effects can have significant joint effects.

## Constraint sensitivity analysis of airline crew rostering using preferential bidding (2018-10-08)
Palav, Vaibhav; Kutanoglu, Erhan

Airline crew rostering is an important part of airline operations that can save costs through optimized assignment of crew members to existing sequences of flights (pairings). Crew rostering with preferential bidding considers bids by crew members that express their preferences for specific pairings, so that they obtain their preferred overall schedule (roster). This problem is traditionally solved as an extension of the classical set covering problem. In this report, we present a preferential-bidding-based set covering model in which all preferences are treated as hard constraints, and we perform a sensitivity analysis of the model by varying all the controllable parameters to uncover the robustness of the model’s solutions under these changes. These parameters include the block hour hard limits, block hour soft limits, and number of off-days in the planning horizon (a month), among others. The parameters are varied with respect to base parameter values, and the changes in the optimal solutions are studied with respect to the objective function value, computational time, and other performance metrics (total block hours assigned, standard deviation of block hours across crew members, etc.). We observe which of these parameters have a large impact on the model and perform this sensitivity analysis to check the robustness of the base model.
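The kind of parameter sweep described here can be sketched on a toy instance. The following two-crew, five-pairing example is invented (the report's model is a much larger set covering formulation solved with an optimization solver); it brute-forces assignments and sweeps the block-hour hard limit to show how the optimal preference score responds:

```python
from itertools import product

# Toy roster: assign each pairing to one of two crew members; each crew
# member's total block hours must not exceed a hard limit. All numbers
# are invented placeholders.
pairings = [12, 9, 15, 7, 10]          # block hours per pairing
preference = {0: [1, 0, 1, 0, 1],      # crew 0's bid (1 = preferred)
              1: [0, 1, 0, 1, 0]}      # crew 1's bid

def best_roster(block_limit):
    """Brute-force the assignment maximizing total satisfied preferences."""
    best = None
    for assign in product([0, 1], repeat=len(pairings)):
        hours = {0: 0, 1: 0}
        score = 0
        for p, crew in enumerate(assign):
            hours[crew] += pairings[p]
            score += preference[crew][p]
        if max(hours.values()) <= block_limit:
            if best is None or score > best[0]:
                best = (score, assign)
    return best   # None when the limit is infeasible

# Sensitivity sweep over the block-hour hard limit
sweep = {limit: best_roster(limit) for limit in (20, 27, 40, 53)}
```

Tightening the hard limit first degrades the objective (fewer preferences can be honored) and eventually makes the instance infeasible, which is exactly the robustness behavior such a sweep is meant to expose.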
This report’s analysis is a first step towards creating a full-fledged model with additional extensions such as seniority-based bidding.

## Development of reliable pavement models (2011-08)
Aguiar Moya, José Pablo; Prozzi, Jorge Alberto; Manuel, Lance; Walton, Michael; Machemehl, Randy B.; Yilmaz, Hilal

As the cost of designing and building new highway pavements increases and the number of new construction and major rehabilitation projects decreases, ensuring that a given pavement design performs as expected in the field becomes vital. Reliability analysis has been used extensively to address this issue in other fields of civil engineering; in pavement structural design, however, the reliability component is usually neglected or overly simplified. To address this need, this dissertation proposes a framework for estimating the reliability of a given pavement structure regardless of the pavement design or analysis procedure being used. The framework is applied with the Mechanistic-Empirical Pavement Design Guide (MEPDG), and failure is considered as a function of rutting of the hot-mix asphalt (HMA) layer. The proposed methodology consists of fitting a response surface in place of the time-demanding implicit limit state functions used within the MEPDG, combined with analytical estimation of reliability using second-moment techniques (the First-Order and Second-Order Reliability Methods, FORM and SORM) and simulation techniques (Monte Carlo and Latin Hypercube simulation). To demonstrate the methodology, a three-layered pavement structure is selected, consisting of a hot-mix asphalt surface, a base layer, and subgrade. Several pavement design variables are treated as random; these include the HMA and base layer thicknesses, the base and subgrade moduli, and the HMA layer binder and air void content.
Information on the variability of and correlation between these variables is obtained from the Long-Term Pavement Performance (LTPP) program, and likely distributions, coefficients of variation, and correlations between the variables are estimated. Additionally, several scenarios are defined to account for climatic differences (cool, warm, and hot climatic regions), truck traffic distributions (mostly single-unit trucks versus mostly single-trailer trucks), and the thickness of the HMA layer (thick versus thin). First- and second-order polynomial HMA rutting failure response surfaces with interaction terms are fit by running the MEPDG under a full factorial experimental design consisting of three levels of each of the aforementioned design variables. These response surfaces are then used to analyze the reliability of the given pavement structures under the different scenarios. To check the accuracy of the proposed framework, direct simulation using the MEPDG was also performed for the different scenarios; very small differences were found between the estimates based on response surfaces and those based on direct simulation, confirming the accuracy of the proposed procedure. Finally, a sensitivity analysis on the number of MEPDG runs required to fit the response surfaces showed that reducing the experimental design by one level still results in response surfaces that properly fit the MEPDG, ensuring the method's applicability in practice.

## Effects of patient heterogeneity in a first-come-first-serve kidney transplant model (2020-05-07)
Chang, Chia-Hao; Shakkottai, Sanjay; Hasenbein, John J.

In this thesis, we discuss how patient death rates may affect patient choice in a kidney transplant system.
Specifically, the transplant system is modeled as an M/M/1 queue with reneging, where patient and kidney arrivals are modeled as independent Poisson processes and a patient's death corresponds to reneging from the queue. The patients face a problem in the form of a Markov decision process, for which we derive the analytical solution in the undiscounted case when the kidney qualities have discrete distributions. With this result, we perform a sensitivity analysis of the overall system. Next, we show that when the kidney distributions converge (in the sense of the Kolmogorov metric), so do their associated value functions, which allows us to extend our results to continuous distributions as well. Finally, our results are substantiated by numerical simulations.

## Monte Carlo sensitivity analyses of DSMC parameters for ionizing hypersonic flows (2018-10-09)
Higdon, Kyle J.; Goldstein, David Benjamin; Varghese, Philip L.; Liechty, Derek S.; Cruden, Brett A.; Raja, Laxminarayan L.

This work focuses on the development and sensitivity analyses of a direct simulation Monte Carlo (DSMC) code to understand the complex physical processes that occur during hypersonic entry into a rarefied atmosphere. Simulations are performed on one-dimensional hypersonic shock scenarios that mimic the conditions of high-altitude atmospheric entry to Earth and Saturn with the Computation of Hypersonic Ionizing Particles in Shocks (CHIPS) code. To model hypersonic entry problems accurately, the CHIPS code must resolve nonequilibrium flows and account for a number of complex gas dynamics processes at the molecular level. In this thesis, several high-temperature models are added to the CHIPS code, including charged-particle models and electronic excitation. These models are refined using preliminary sensitivity analyses, resulting in improved electronic excitation models and a new backward chemical reaction model.
The CHIPS simulations completed in this work reproduce rarefied hypersonic shock tube experiments performed in the Electric Arc Shock Tube (EAST) at NASA Ames Research Center. The CHIPS results are post-processed by the NEQAIR line-by-line radiative solver to compare directly against spectra measured experimentally in EAST. The DSMC techniques used to model hypersonic phenomena require numerous experimentally calibrated parameters, many of which are inferred from lower-temperature experiments, resulting in an unknown amount of uncertainty in the simulated results at the extreme conditions of hypersonic flow. A global Monte Carlo sensitivity analysis is performed by simultaneously varying the CHIPS input parameter values to understand the sensitivity of experimentally measured quantities simulated by the CHIPS and NEQAIR codes. The sensitivities of several of these output quantities are used to rank the input parameters, identifying those most important for simulating the hypersonic scenario. It was concluded that the experimentally measured radiation intensity is most sensitive to the following key processes: N + e⁻ ⇌ N⁺ + e⁻ + e⁻, NO + N⁺ ⇌ N + NO⁺, N₂ + N ⇌ N + N + N, N + O ⇌ NO⁺ + e⁻, N + N ⇌ N₂⁺ + e⁻, and Z_elec for N, O, and N₂⁺. In the future, this ranking can be used to identify which input parameters should be experimentally investigated and where model improvements could be beneficial, and to reduce the parameter space for DSMC calibrations to experimental data.

## Sensitivity calculations on a soot model using a partially stirred reactor (2010-05)
Wu, Nathan Gabriel; Raman, Venkat; Clemens, Noel T.

Sensitivity analysis was performed on a soot model using a partially stirred reactor (PaSR) in order to determine the effects of mixing model parameters on soot scalar values.
The sensitivities of the mixture fraction ζ and progress variable C to the mixing model constant C_φ were calculated; these values were used to compute the sensitivity of the water mass fraction Y_H₂O to C_φ and the sensitivities of several soot quantities to the soot moments. Results were validated by evaluating the mean mixture fraction sensitivity and a long-simulation-time case. From the baseline case, it was noted that soot moment sensitivities tended to peak on the rich side of the stoichiometric mixture fraction ζ_st. The timestep, number of notional particles, mixing timescale τ_mix, and residence time τ_res were varied independently. The choices of timestep and notional particle count were shown to be sufficient to capture the relevant scalar profiles and did not greatly affect the sensitivity calculations. Altering τ_mix or τ_res was shown to affect the sensitivity to mixing, and it was concluded that the soot model is more heavily influenced by the chemistry than by the mixing.

## Statistical methods for the analysis of DSMC simulations of hypersonic shocks (2012-05)
Strand, James Stephen; Goldstein, David Benjamin; Moser, Robert; Varghese, Philip; Ezekoye, Ofodike; Prudencio, Ernesto

In this work, statistical techniques were employed to study the modeling of a hypersonic shock with the Direct Simulation Monte Carlo (DSMC) method and to gain insight into how the model interacts with a set of physical parameters. DSMC is a particle-based method useful for simulating gas dynamics in rarefied and/or highly non-equilibrium flowfields. A DSMC code was written and optimized for use in this research. The code was developed with shock tube simulations in mind, and it includes a number of improvements that allow for the efficient simulation of one-dimensional hypersonic shocks. Most importantly, a moving sampling region is used to obtain an accurate steady shock profile from an unsteady, moving shock wave.
The code is MPI-parallel, and an adaptive load balancing scheme ensures that the workload is distributed properly between processors over the course of a simulation. Global, Monte Carlo based sensitivity analyses were performed in order to determine which of the parameters examined in this work most strongly affect the simulation results for two scenarios: a 0D relaxation from an initial high-temperature state, and a hypersonic shock. The 0D relaxation scenario was included in order to examine whether, with appropriate initial conditions, it can be viewed in some regards as a substitute for the 1D shock in a statistical sensitivity analysis. In both analyses, sensitivities were calculated based on both the square of the Pearson correlation coefficient and the mutual information. The quantity of interest (QoI) chosen for these analyses was the NO density profile. This vector QoI was broken into a set of scalar QoIs, each representing the density of NO at a specific point in time (for the relaxation) or a specific streamwise location (for the shock), and sensitivities were calculated for each scalar QoI with both measures of sensitivity. The sensitivities were then integrated over the set of scalar QoIs to determine an overall sensitivity for each parameter, with a weighting function used in the integration to emphasize sensitivities in the region of greatest thermal and chemical non-equilibrium. The six parameters that most strongly affect the NO density profile were found to be the same for both scenarios, which supports the claim that a 0D relaxation can in some situations be used as a substitute model for a hypersonic shock. These six parameters are the pre-exponential constants in the Arrhenius rate equations for the N₂ dissociation reaction N₂ + N ⇄ 3N, the O₂ dissociation reaction O₂ + O ⇄ 3O, the NO dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions N₂ + O ⇄ NO + N and NO + O ⇄ O₂ + N.
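The correlation-based global sensitivity procedure described here (sample the parameters, compute the vector QoI, score each parameter by squared Pearson correlation at each output point, then integrate the scores with a weighting function) can be sketched on a toy model. The two-rate relaxation and the weights below are invented placeholders, not the DSMC chemistry:

```python
import math
import random

def profile(k1, k2, times):
    # toy two-rate relaxation standing in for a species density history
    return [math.exp(-k1 * t) + 0.1 * math.exp(-k2 * t) for t in times]

def pearson2(xs, ys):
    """Squared Pearson correlation of a parameter with one scalar QoI."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return 0.0 if sxx == 0 or syy == 0 else sxy * sxy / (sxx * syy)

rng = random.Random(0)
times = [0.1 * i for i in range(1, 21)]
weights = [math.exp(-t) for t in times]   # emphasize the early, far-from-equilibrium region
samples = [(rng.uniform(0.5, 1.5), rng.uniform(0.5, 1.5)) for _ in range(400)]
outputs = [profile(k1, k2, times) for k1, k2 in samples]

def integrated_sensitivity(param_index):
    vals = [s[param_index] for s in samples]
    per_point = [pearson2(vals, [o[j] for o in outputs]) for j in range(len(times))]
    return sum(w * r for w, r in zip(weights, per_point)) / sum(weights)

ranking = sorted([(integrated_sensitivity(i), f"k{i + 1}") for i in (0, 1)], reverse=True)
```

Because the first rate carries ten times the amplitude of the second, the weighted, integrated scores rank it first, which is the same ordering logic used to pick out the six dominant Arrhenius pre-exponential constants.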
After identification of the most sensitive parameters, a synthetic-data calibration was performed to demonstrate that the statistical inverse problem could be solved for the 0D relaxation scenario. The calibration was performed with the QUESO code, developed at the PECOS center at UT Austin, which employs the Delayed Rejection Adaptive Metropolis (DRAM) algorithm. The six parameters identified by the sensitivity analysis were calibrated successfully against a group of synthetic datasets.
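As a minimal illustration of the same kind of statistical inverse problem, the sketch below calibrates a single toy relaxation rate against synthetic noisy data with a plain random-walk Metropolis sampler. QUESO's DRAM algorithm adds delayed rejection and proposal adaptation on top of this basic scheme; all numbers here are invented:

```python
import math
import random

rng = random.Random(42)
true_k, sigma = 1.3, 0.02
times = [0.2 * i for i in range(1, 11)]
# synthetic data: toy exponential relaxation plus Gaussian noise
data = [math.exp(-true_k * t) + rng.gauss(0.0, sigma) for t in times]

def log_post(k):
    """Log-posterior: Gaussian likelihood, flat prior on k > 0."""
    if k <= 0:
        return -math.inf
    sse = sum((d - math.exp(-k * t)) ** 2 for d, t in zip(data, times))
    return -sse / (2.0 * sigma ** 2)

# random-walk Metropolis chain
k, lp = 0.5, log_post(0.5)
chain = []
for _ in range(5000):
    kp = k + rng.gauss(0.0, 0.1)          # symmetric proposal
    lpp = log_post(kp)
    if math.log(rng.random()) < lpp - lp:  # accept/reject step
        k, lp = kp, lpp
    chain.append(k)
post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

After burn-in, the posterior mean recovers the true rate used to generate the synthetic data, which is the basic check a synthetic-data calibration performs before tackling real measurements.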