Browsing by Subject "Mathematical modeling"
Now showing 1 - 9 of 9
Item A reacting jet direct numerical simulation for assessing combustion model error (2021-03-12) Reuter, Bryan William; Moser, Robert deLancey; Bisetti, Fabrizio; Dawson, Clint; Ghattas, Omar; Oliver, Todd

The simulation of turbulent combustion systems is a vital tool in the design and development of new technologies for power generation, transportation, defense applications, and industrial heating. In an engineering design cycle, modeling realistic device configurations in a cost- and time-effective manner is required. Due to their flexibility and computational tractability, Reynolds-Averaged Navier-Stokes (RANS)-based models are most commonly used for these purposes. However, these models are known to be inadequate. Turbulent combustion is the coupling of two multiscale, nonlinear phenomena, each of which poses many modeling challenges on its own. Hence, it is unsurprising that the modeling ansatzes and simplifying assumptions which lead to these practical RANS-based models are suspect. Since RANS-based models will continue to be the dominant tool for turbulent combustion simulation, it is necessary to improve their predictivity through a better understanding of their deficiencies. There are three main modeling issues for turbulent combustion: modeling the turbulent flow, representing the chemical reactions, and capturing the interaction between the turbulence and the chemistry. Model errors can easily be conflated when attempting to quantify deficiencies in this multiphysics context where many individual models are coupled. This work introduces a new technique for isolating these errors through the creation of a flamelet-based direct numerical simulation (DNS) of a nonpremixed, temporally-evolving, planar, reacting jet. DNS is a technique which resolves all lengthscales and timescales of the turbulent flow, providing high-quality data for model development but at a significant computational cost.
In the turbulent combustion context, the turbulence-chemistry interaction is also fully resolved. By closing the DNS with a steady laminar flamelet representation, a typical chemical reactions model for RANS-based simulations, RANS turbulence closures and turbulence-chemistry interaction models can be evaluated in isolation through a priori testing. Conversely, by comparing the flamelet DNS to a second DNS employing a higher-fidelity chemistry model, the flamelet closure and its impact on the flame's evolution can be interrogated directly. To obtain the DNS data, a novel algorithm for solving the variable-density, low-Mach Navier-Stokes equations, extending the method of Kim, Moin, and Moser for incompressible flow, is detailed here. It is a pseudospectral Fourier/B-spline collocation approach which obtains second-order accuracy in time and numerical stability for large density ratios with an efficient, matrix-free, iterative treatment of the scalar equations. The a posteriori comparisons of the flamelet DNS and the complex chemistry DNS suggest the flamelet model can significantly alter the evolution of the mean state of the reacting jet; however, violations of global conservation were identified in the complex chemistry DNS. Therefore, no strong conclusions can be made about the chemical reactions model from the comparisons. Significant shortcomings have been identified in the a priori evaluations of the aforementioned RANS closures for turbulent transport, scalar mixing, and turbulence-chemistry interaction, where the flamelet model is taken to be exact. Finally, a flawed assumption in the steady laminar flamelet approach has been directly linked to nonphysical behavior of the density for small values of the scalar dissipation rate.

Item Development of a computational-experimental model to predict glioma response to radiation treatment (2021-11-29) Liu, Junyan, Ph.D.; Yankeelov, Thomas E.; Yeh, Hsin-chih; Brock, Amy; Vasquez, Karen M.

Radiation is essential to malignant glioma and glioblastoma treatment. However, the prognosis of glioblastoma remains poor, with a median survival of 15 months. This is partly due to the heterogeneous radiosensitivity among patients. A mechanism-based model that can make dynamic predictions has the potential to guide and optimize treatment on a patient-specific basis. The purpose of this dissertation is to develop and validate a computational-experimental model that explicitly incorporates the underlying radiobiology and makes accurate predictions of the radiation response of glioma cells. Specifically, we first propose a mathematical model of the response to a single dose of radiation that incorporates DNA repair and cell death pathways, and validate it under eight different doses from 2 Gy to 16 Gy via microscopy in vitro. We then extend this model to fractionated treatment and validate it with six different fractionation schemes using total doses of either 16 Gy or 20 Gy. Finally, we propose a data assimilation framework that individualizes the prediction based on observations of individual replicates, which further improves prediction accuracy. We present a full account of how developing a mechanism-based, experiment-driven mathematical model can assist in characterizing and predicting radiation response, which could eventually be used to optimize the treatment schedule.

Item DPG methods for nonlinear fiber optics (2018-06-13) Nagaraj, Sriram; Demkowicz, Leszek; Caffarelli, Luis A; Engquist, Bjorn; Bui-Thanh, Tan; Simmons, Christopher; Babuska, Ivo M

In recent years, the Discontinuous Petrov-Galerkin (DPG) method has been the subject of significant study. It comes with a collection of desirable properties, including uniform/mesh-independent stability, localizable test norms via broken test spaces, and a canonical error indicator that is obtained as part of the solution.
In this work, the DPG method is applied to problems arising in fiber optics. Accurate modeling of wave propagation in nonlinear media is an important task in fiber optics applications. Nonlinear Maxwell equations in the context of optical fibers have been studied extensively in the past. Analysis of these intensity-dependent nonlinearities is based on several simplifying approximations which result in a nonlinear Schrodinger (NLS) type equation. The Schrodinger equation is discussed from a spacetime DPG perspective. In particular, a 2nd order L² stable ultraweak formulation of the Schrodinger equation is constructed by introducing the notion of an auxiliary boundary operator. This theoretical device requires an operator-specific conforming element to develop optimal convergence rates. Numerical studies show how, modulo (expected) roundoff issues, the theoretical convergence rates are delivered. Next, the use of the DPG method in modeling and simulating optical fiber laser amplifiers with nonlinear Raman gain is studied. In this application, the interaction of two time-harmonic electromagnetic fields (the signal and pump fields), governed by two weakly coupled nonlinear Maxwell equations, results in the amplification phenomenon. A novel Raman gain model for describing the phenomenon is proposed, and an ultraweak DPG formulation is used for the discretization of the proposed model. The nonlinearity is handled by simple iterations between the two systems. DPG implementation of a perfectly matched layer (PML) at the exit end of the fiber is essential in this model, as is the use of sum factorization for element computations. The presented results show that the signal field indeed gains power along the fiber, thereby justifying the use of the model.
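For reference, the NLS-type equation mentioned above has a standard textbook form. In one common convention for lossless pulse propagation in a single-mode fiber (the symbols below are the usual textbook names, not notation taken from this dissertation), it reads

```latex
i\,\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2}
  + \gamma\,|A|^2 A = 0,
```

where A(z, T) is the slowly varying pulse envelope, z the propagation distance, T the retarded time, β₂ the group-velocity dispersion parameter, and γ the nonlinear (Kerr) coefficient. The intensity-dependent term γ|A|²A is what makes the problem nonlinear.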
Auxiliary results presented in this dissertation include the construction of DPG Fortin operators for 2nd order problems.

Item Long-term dynamics and theoretical considerations in coral holobiont adaptation (2024-05) Scott, Carly Brielle; Matz, Mikhail V.; Ostling, Annette; Narasimhan, Vagheesh M; Toth, Lauren

Corals are predicted to adapt to environmental change in two ways: through the nuclear evolution of the host and/or through acquiring new, more suitable symbiotic partners. Given the rapid rate of climate change, considerable attention has focused on the latter, as new symbiotic partnerships could, in theory, be formed within one coral generation. Underlying this idea are the assumptions that coral-holobiont partnerships are flexible in response to the environment, and that their flexibility benefits the host. However, the contributions of host genetics, local environment, and time in structuring the holobiont have not been estimated in tandem. This work directly addresses these factors across extended temporal scales by: (1) establishing the feasibility of ancient (6,000-year-old) DNA in coral reef research, (2) determining what structures the holobiont in centuries-old massive corals, and (3) providing a novel theoretical framework for why holobiont shifts occur. From this research, it is clear that host genetics play a strong role in structuring the coral holobiont, suggesting host evolution is essential for reef persistence under future conditions. However, despite many holobiont members remaining present over thousands of years, the local environment and age class have a significant role in structuring the bacterial portion of the coral holobiont. Finally, theoretical modeling of coral bleaching suggests that observed shifts in coral-holobiont partnerships may arise not from adaptation to increasing thermal stress, but rather from underlying changes in host demography.
Together, these results demonstrate that the evolutionary potential of the coral host cannot be overlooked, as holobiont reassembly alone will likely not create more robust reefs.

Item Mathematical modeling of epidemic surveillance (2019-09-16) Chen, Xi, Ph.D.; Meyers, Lauren Ancel; Hasenbein, John; Sarkar, Purnamrita; Mueller, Peter

My thesis focuses on three aspects of epidemic surveillance: estimation of the probability, and corresponding uncertainty analysis, of disease importation into multiple geographic regions (Chapter 1); estimation of disease transmission after local transmission is established (Chapter 2); and estimation of prevalence and the corresponding confidence interval, incorporating individual-level test sensitivity and specificity (Chapter 3). The maximum entropy model, a commonly used species distribution model (SDM), normally combines observations of species occurrence with environmental information to predict the geographic distributions of animal or plant species. However, it only produces point estimates for the probability of species presence. To understand the uncertainty of these point estimates, in Chapter 1 we analytically derive the variance of the outputs of the maximum entropy model from the variance of the inputs. We applied the analytic method to obtain the standard deviation of dengue importation probability and Aedes aegypti suitability. Dengue occurrence data and Aedes aegypti mosquito abundance data, combined with demographic and environmental data, were used to obtain point estimates and the corresponding variance. Because the true distributions are not available for comparison, we compared and contrasted the performance of the analytical expression with the bootstrap method and with a Poisson point process model, which has been shown to be equivalent to the maximum entropy model under the assumption of independent point locations.
Both the dengue importation probability and Aedes aegypti suitability examples show that the methods generate comparable results, and that the analytic method we introduce is dramatically faster than the bootstrap method and applies directly to the maximum entropy model. Infectious diseases such as influenza progress quickly, potentially reaching large parts of populations. Accurately estimating the parameters of an infectious disease progression model can help health organizations determine the progression and severity of the disease and respond properly and quickly. In Chapter 2, we study the application of two widely used basic particle filter methods, the bootstrap filter and the auxiliary particle filter, to estimating the parameters of infectious disease progression models, which are nonlinear in nature. We propose a posterior particle filter algorithm and two single-statistic posterior particle filter algorithms to better handle outliers in the data. These algorithms are shown to outperform the traditional bootstrap and auxiliary particle filters in accurately and consistently estimating the parameters of compartmental SIR models. In addition, we propose a resampling algorithm and compare it with the currently popular resampling algorithm to show the importance of resampling in improving the consistency of the particle filters. Dengue is currently diagnosed using a test algorithm determined by the number of days after illness onset, which complicates prevalence estimation because test sensitivity and specificity vary across patients with different RNA and antibody levels. In Chapter 3, we address the challenge of adjusting the estimated prevalence and propose a way of estimating the corresponding confidence interval that incorporates individual-level sensitivity and specificity.
We compared sensitivity and specificity, for individual-level benefit, and average estimation error and precision, for surveillance purposes, for both single tests and possible combinations of multiple tests. The prevalence estimation adjustment corrects the estimates for all test combinations. Using immunoassays targeting the DENV nonstructural protein 1 (NS1), the combination of the NS1 and IgM-capture immunoassays (ELISA), and the combination of NS1 and real-time reverse transcription polymerase chain reaction (RT-PCR), can significantly improve the sensitivity of testing without sacrificing specificity, while narrowing the confidence interval of the prevalence estimate.

Item Mathematical modeling of flow through vegetated regions (2013-08) Mattis, Steven Andrew; Dawson, Clinton N.

Understanding the flow of sea and fresh water through complex coastal regions is of utmost importance for a number of applications of interest to the scientific and engineering community, including wetland health and restoration, inland flooding due to tropical storms and hurricanes, and navigation through coastal waters. In such regions, the existence of vegetation increases flow resistance, which is a major factor in determining velocity and water level distribution in wetlands and inland. Commonly, the momentum loss due to vegetation is included in a bottom friction term in the model equations; however, such models may oversimplify the complex resistance characteristics of the system. With recent increases in computational capabilities, it is now feasible to develop and implement more intricate resistance models that more accurately capture these characteristics. We present two methods for modeling flow through vegetated regions. In the first method, we employ mathematical and computational upscaling techniques from the study of subsurface flow to parametrize drag in a complex heterogeneous region. These parameterizations vary greatly depending on Reynolds number.
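To illustrate what a Reynolds-number-dependent drag law of this kind looks like, here is a minimal sketch using the classical Ergun correlation for flow through an array of obstacles; the Ergun form and all parameter values below are textbook stand-ins, not the upscaled laws derived in this dissertation:

```python
def ergun_pressure_gradient(u, d, eps, mu=1.0e-3, rho=1000.0):
    """Classical Ergun correlation for the pressure gradient (Pa/m)
    through a bed of obstacles: a viscous (Darcy) term linear in the
    velocity u, plus an inertial (Forchheimer) term quadratic in u.

    u   = superficial velocity (m/s)
    d   = obstacle diameter (m)
    eps = porosity (fluid volume fraction)
    mu, rho = fluid viscosity (Pa s) and density (kg/m^3), water-like defaults
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps**3 * d**2)
    inertial = 1.75 * rho * (1.0 - eps) * u**2 / (eps**3 * d)
    return viscous + inertial

# The linear term dominates at low velocity (low Reynolds number) and the
# quadratic term at high velocity, mirroring the laminar-to-turbulent range.
slow = ergun_pressure_gradient(1e-4, d=0.01, eps=0.9)
fast = ergun_pressure_gradient(1.0, d=0.01, eps=0.9)
```

The point of the sketch is structural: a single drag parameterization must blend a linear (laminar) and a quadratic (inertial) contribution to cover the order-1 to order-1000 Reynolds-number range described next.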
For the coastal flows in which we are interested, the Reynolds number at different locations in the domain may vary from order 1 to order 1000, so we must consider both laminar and fully turbulent flows. Large eddy simulation (LES) is used to model the effects of turbulence. The geometry of a periodic cell of vegetative obstacles is completely resolved in the fluid mesh, with a standard no-slip boundary condition imposed on the fluid-vegetation boundaries. The corresponding drag coefficient is calculated, and upscaling laws from the study of inertial flow through porous media are used to parametrize the drag coefficient over a large range of Reynolds numbers. Simulations are performed using a locally conservative, stabilized continuous Galerkin finite element method on highly resolved, unstructured 2D and 3D meshes. The second method we present is an immersed structure approach. In this method, separate meshes are used for the fluid domain and the vegetative obstacles. Borrowing techniques from immersed boundary finite element methods, the effects of the fluid on the vegetative structures, and vice versa, are calculated using integral transforms. This method allows us to model flow over much larger scales and with much more complicated obstacle geometry. Using a simple elastic structure model, we can incorporate bending and moving obstacles, which would be extremely computationally expensive with the first method. We model flexible vegetation as thin, elastic, inextensible cantilever beams. We present two numerical methods for modeling the beam motion and analyze their computational expense, stability, and accuracy. Using the immersed structure approach, a fully coupled steady-state fluid-vegetation interaction model is developed, as well as a dynamic interaction model assuming dynamic fluid flow and quasi-static beam bending. This method is verified using channel flow and wave tank test problems.
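For reference, the simplest linear idealization of such a thin cantilever beam under a distributed fluid load is the Euler-Bernoulli relation (a small-deflection textbook baseline; the inextensible beam model used in this work is more general):

```latex
EI\,\frac{\mathrm{d}^4 w}{\mathrm{d}x^4} = q(x), \qquad
w(0) = w'(0) = 0, \qquad w''(L) = w'''(L) = 0,
```

where w(x) is the deflection, EI the flexural rigidity, and q(x) the fluid load per unit length; the boundary conditions encode a clamped base at x = 0 (the stem anchored in the bed) and a free tip at x = L carrying no moment or shear.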
We calculate the bulk drag coefficient in these flow scenarios and analyze its trends as model parameters, including stem population density and flow Reynolds number, are varied. These results are compared to well-respected experimental results. We model real-life beds of Spartina alterniflora grass with representative beds of flexible beams and perform similar comparisons.

Item Mechanical and Hydraulic Behavior of Acid Fractures - Experimental Studies and Mathematical Modeling (1997-12) Gong, Ming; Hill, Daniel A.

Acid fracturing is a well stimulation method commonly used in carbonate reservoirs. In the process, an HCl solution, sometimes viscosified or emulsified, is injected into the formation above the fracture pressure to create a fracture or to open existing natural fractures. Acid etches the fracture faces unevenly, leaving a conductive pathway for reservoir fluids to flow into the wellbore. The key to successful acid fracturing is the achievement of acid penetration and the creation of sufficient fracture conductivity. Much research has been done to study acid penetration in acid fracturing. However, the hydraulic conductivity created by acid etching is not well understood. One empirical correlation is available to evaluate acid fracture conductivity, reported by Nierode and Kruk over 20 years ago. Acid etching is a stochastic process, and the resulting hydraulic mechanisms of acid fractures are complex. The conductivity is affected by the aperture and contact area of the fracture under closure stress. The damage to rock strength at the fracture surfaces by acid adds complexity to the prediction of hydraulic conductivity of acid fractures. The leakoff of acid into the formation through the fracture faces makes the situation even more complex. Acid contact time, acid leakoff, rock mechanical properties, and formation heterogeneity all affect the creation of hydraulic conductivity of an acid fracture.
This work explores the mechanisms of hydraulic conductivity of acid fractures in two ways. The first is a systematic experimental study of the creation of acid fracture conductivity, including characterization of the surface roughness created by acid etching, investigation of the damage to rock compressive strength by acidizing, and measurement of hydraulic conductivity under closure stress. To study the effect of rock mechanical properties on the creation of hydraulic conductivity of acid fractures, important mechanical properties of the rock samples have been carefully measured. To understand the damage to rock strength by acid, the grain-scale microstructures of core samples have been examined. Experimental data have shown that longer acid contact results in a rougher fracture surface and, in turn, higher hydraulic conductivity. The second focus of this work is the mathematical modeling of acid fracture conductivity. Several different theoretical models for fracture conductivity have been reviewed and examined. Based on our experimental results, a new fracture deformation model was derived that considers both the surface roughness and the rock mechanical properties. The roughness of acid-etched surfaces as well as the rock strength have been correlated to acidizing conditions. The fracture closure under stress is modeled with the plastic deformation of asperities. Finally, a cubic law is used to calculate the fracture conductivity.
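The cubic law itself is standard: idealizing the open fracture as two smooth parallel plates separated by an aperture w gives a fracture permeability of w²/12, so the conductivity (the permeability-width product) is w³/12. A minimal sketch in SI units (illustrative only; the dissertation applies the law with the deformed aperture from its closure model):

```python
def cubic_law_conductivity(aperture_m: float) -> float:
    """Fracture conductivity k_f * w (units of m^3) from the cubic law.

    For laminar flow between smooth parallel plates with aperture w,
    the fracture permeability is k_f = w**2 / 12, so the
    permeability-width product is w**3 / 12.
    """
    if aperture_m < 0:
        raise ValueError("aperture must be non-negative")
    return aperture_m**3 / 12.0

# Doubling the aperture increases conductivity eightfold, which is why
# the uneven etched aperture dominates acid fracture performance.
base = cubic_law_conductivity(1e-3)  # 1 mm aperture
wide = cubic_law_conductivity(2e-3)  # 2 mm aperture
print(round(wide / base, 6))  # 8.0
```

The cubic dependence is the reason small errors in the predicted aperture under closure stress translate into large errors in predicted conductivity.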
The prediction of acid fracture conductivity using this model with appropriate parameters shows excellent agreement with experimental data.

Item Modeling the impact of interventions to curb the spread of the coronavirus disease 2019 and influenza (2022-11-30) Pasco, Rémy; Meyers, Lauren Ancel; Bollapragada, Raghu; Hasenbein, John; Morton, David

The COVID-19 pandemic has brought to the forefront of society the urgent need to better our understanding of infectious diseases, including how they spread and what can be done to control their transmission. In my dissertation I use mathematical models of infectious diseases to analyze the impact and efficacy of various interventions, pharmaceutical and non-pharmaceutical, on the spread of respiratory diseases, focusing on COVID-19 and influenza. The results of my analyses can be used to inform public health measures and guide decision makers in allocating, or prioritizing, scarce resources in order to reduce the burden inflicted by pandemics or seasonal epidemics caused by infectious diseases. First, I use a compartmental model of COVID-19 stratified by age and risk group to analyze the impact of exempting construction workers from the stringent social distancing measures enacted early in the pandemic. In particular, I estimate the increase in disease transmission and hospitalizations in the broader metropolitan area stemming from this exemption, as well as the relative risk incurred by the construction workers themselves. Next, I study the spread of COVID-19 in schools with an agent-based model. Using this detailed model I present a data-driven approach for allocating surveillance tests among schools to reduce both overall infections and disparities among schools that differ in their in-school transmission risk and rate of disease introduction.
Using the same model I quantify and compare the impacts on disease burden and in-person learning of the various interventions available to schools, namely proactive testing, vaccination, face coverings, and various quarantine strategies. Finally, I analyze the consequences of widespread use of antivirals for the treatment of influenza, weighing their capacity to reduce morbidity and mortality against the risk of the emergence of treatment resistance. I carry out this analysis using an individual-based model with a realistic network structure that is calibrated using results from clinical trials of the antiviral baloxavir marboxil in adults and pediatric patients with influenza.

Item Variation in tick host preference and its epidemiological impact (2014-12) Pierce, Kelly Anne; Meyers, Lauren Ancel; Sarkar, Sahotra; Bolnick, Daniel; Leibold, Mathew; Miller, Jennifer; Williamson, Phillip

Tick-borne pathogens pose a significant health risk to humans and wildlife. The complex interactions between ticks and their hosts make management of tick-borne pathogens particularly challenging. Many of the most common species of ticks feed on a wide variety of hosts, but transmit pathogens that are only capable of infecting a narrow range of susceptible host species. Prior research has focused on understanding which tick hosts are capable of serving as pathogen reservoirs by carrying and transmitting tick-borne pathogens. However, relatively little attention has been given to studying how ticks choose their hosts. Host choice is of particular importance to the epidemiology of tick-borne pathogens when not all hosts are pathogen reservoirs. My dissertation research investigates the nature of host choice and its impact on disease prevalence in two tick species with similar life histories and host ranges: the lone star tick (Amblyomma americanum) and the American dog tick (Dermacentor variabilis). I conducted an experiment to demonstrate that lone star ticks can respond to host scent.
Certain host scents, including those from some individual opossums and raccoons, are attractive to ticks. Proximity to scent also influences tick movement. I also looked for evidence that American dog tick populations are genetically structured by host species identity, and found that certain tick genotypes correlate with host species. This suggests that these ticks may have heritable host preferences that influence their feeding behaviors. Finally, I used a mathematical model to predict disease transmission probability and lone star tick preference for reservoir hosts. I considered hypothetical wildlife communities with different reservoir host relative abundances, and found that changes in relative abundance influence both disease transmission probability and tick host preference estimates. The model also suggests that lone star ticks must parasitize reservoir hosts more frequently when those hosts are less common. These results highlight the importance of host choice and host community composition as determinants of tick-borne disease prevalence.