# Browsing by Subject "Numerical methods"

Now showing 1 - 14 of 14


## A compact circumstellar shell model as a source of high velocity features in Type Ia supernovae (2018-08-10)

Mulligan, Brian William; Wheeler, J. Craig; Bromm, Volker; Milosavljevic, Milos; Robinson, Edward L.; Vinko, Jozsef

High-velocity features (HVF) of Type Ia supernovae are found in calcium, silicon, iron, and other elements. These features are observed only at early epochs, generally within the first one to three weeks after the supernova explodes, and can provide insight into the cause of the supernova or the composition of the system prior to the explosion. We consider a model in which the HVF are the result of an interaction between the supernova and a compact circumstellar shell. Hydrodynamic simulations of the interaction are performed, and synthetic spectra are generated from the simulation results using syn++ to explore the effect of the shell and the interaction. Of the masses, sizes, and density profiles of the shell that we consider, only density profiles that have a steep gradient at the outer edge may be ruled out based on the shape and evolution of the Ca II near-infrared feature. The observed features of SN 2011fe are then fit using these models; the mass of the shell is not well constrained, but the velocity of the feature near peak brightness is better explained by a shell with a mass of 0.005 M⊙. I describe a method to estimate the calcium abundance in supernova or shell material based on the results of the fit, assuming purely radiative excitation and ionization and simultaneously solving the excitation and ionization states for a given element. Possible compositions of the shell and ejecta are explored using tardis for spectral synthesis, considering shells that consist of a solar abundance of metals within a hydrogen, helium, or carbon-and-oxygen substrate, as well as compositions based upon a helium envelope detonation.
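The calcium-abundance estimate above rests on level populations set by radiative excitation. As a rough illustration of the kind of excitation-state balance involved, here is a minimal two-level Boltzmann sketch for an atom in detailed balance with an undiluted blackbody radiation field; the statistical weights, level spacing, and radiation temperatures are hypothetical placeholders, not the Ca II values used in the thesis.

```python
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def boltzmann_ratio(g_upper, g_lower, delta_e_ev, t_rad):
    """Ratio of upper- to lower-level populations for a two-level atom in
    detailed balance with an undiluted blackbody field at temperature t_rad
    (pure radiative excitation, no collisions)."""
    return (g_upper / g_lower) * math.exp(-delta_e_ev / (K_B_EV * t_rad))

# Hypothetical two-level system: statistical weights 4 and 2, 1.7 eV level
# spacing, and two radiation temperatures bracketing peak light.
cool = boltzmann_ratio(4, 2, 1.7, 5000.0)
hot = boltzmann_ratio(4, 2, 1.7, 10000.0)
print(cool, hot)
```

A full treatment, as the abstract notes, must solve all excitation and ionization states of the element simultaneously rather than level pairs in isolation.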
The material in the shell is more likely to contain a solar or super-solar abundance of calcium; the substrate within which this calcium lies is not detectable using spectra.

## A Boundary Element Method for the strongly nonlinear analysis of ventilating water-entry and wave-body interaction problems (2009-08)

Vinayan, Vimal; Kinnas, Spyros A.

A two-dimensional Boundary Element Method (BEM) is developed to study the strongly nonlinear interaction between a surface-piercing body and the free surface. The scheme is applied to problems with and without the possibility of ventilation resulting from the motion and geometric configuration of the surface-piercing body. The main emphasis of this research is the development of numerical methods to improve the performance prediction of surface-piercing propellers by including the whole range of free-surface nonlinearities. The scheme is applied to predict the ventilated cavity shapes resulting from the vertical and rotational motion of a blade section with fully nonlinear free-surface boundary conditions. The current method predicts the ventilated cavity shapes for a wide range of angles of attack and Froude numbers, in good agreement with existing experimental results. Through a comparison with a linearized free-surface method, the current method highlights the shortcomings of the negative-image approach commonly used in two-dimensional and three-dimensional numerical methods for surface-piercing hydrofoils or propellers. These capabilities make the current method a unique contribution to improving numerical tools for the performance prediction of surface-piercing propellers. The scheme is also applied to predict the roll and heave dynamics of two-dimensional Floating Production Storage and Offloading (FPSO) vessel hull sections within a potential flow framework.
The development of the potential flow model is aimed at validating the free-surface dynamics of an independently developed Navier-Stokes solver for predicting the roll characteristics of two-dimensional hull sections with bilge keels.

## Connecting the dots: tracking galaxy evolution using constant cumulative number density at 3 […] (2015-12)

Jaacks, Jason Dale; Finkelstein, Steven L.; Bromm, Volker

Using the cosmological smoothed particle hydrodynamics code GADGET-3, we make a realistic assessment of the technique of using constant cumulative number density as a tracer of galaxy evolution. We find that over a redshift range of 3 […]

## Decision-making frameworks for practical industrial applications in optimal process design and control (2021-08-16)

Costandy, Joseph Gamal Nessim; Baldea, Michael; Edgar, Thomas F.; Beaman, Joseph; Bonnecaze, Roger; Rochelle, Gary

While economics are the driving force behind many of the decisions made by industrial stakeholders, the methodologies employed to make high-level decisions often rely on heuristics that may not be quantitatively optimal. In this dissertation, I develop optimization-based frameworks that enable quantitatively driven high-level decision-making for two problems of practical industrial significance. In the first part of the dissertation, I address the problem of deciding the operating mode (batch or continuous-flow) of a chemical process, taking into account the fundamental differences in the natures of the two operating modes (such as the batch advantage of utilizing reactors for the manufacture of multiple products, or the batch disadvantage of reactor cleanup between campaigns), the size and cost of the respective reactor units, and the potential use of reactor networks to optimize performance.
I develop a first-principles-based non-dimensionalization algorithm that unifies the model for all reactor types and chemical systems from the two operating modes, enabling direct performance comparisons between reactors of the two operating modes. In addition, I introduce a novel discretization method, orthogonal collocation on finite elements for reactors (OCFERE), that allows the consideration of networks of reactors of either operating mode, and I unify the description of the economics of the two operating modes. The result is a framework in which the solution of a single optimization problem both decides the operating mode and finds the optimal reactor network design. In the second part of the dissertation, I address the problem of quantifying the monetary value of improvements in process control. While methods have been developed for quantifying the value of control for predominantly steady-state processes, there has been no attempt to quantify the monetary value of control for predominantly transient processes. I first review the problem and highlight the relationship between optimal scheduling and process control for transient processes. Then, I utilize the general framework of integrated scheduling and control to develop novel performance functions that enable the quantification of the monetary value of control from a scheduling perspective for a predominantly transient process.
I posit that the transition time between one product and the next in a production sequence can be used as a performance metric over which the value of control can be quantified.

## Development of mobile platform for inventory and inspection applications in nuclear environments (2020-05)

Anderson, Robert Blake; Landsberger, Sheldon; Pryor, Mitch; Haas, Derek; Sulzer, James; Karadkar, Unmil

Worker radiation exposure is a major constraint in nuclear facilities, but advances in robotics have the potential to produce novel solutions to this problem beyond the traditional methods of the nuclear field. This dissertation details recent work towards this goal within the Nuclear and Applied Robotics Group of The University of Texas at Austin. The primary contributions relate to autonomous, mobile radiation surveying and statistical analysis of the acquired data to reach quantitative conclusions about radioactive sources in the survey environment. This work has produced a mobile robotic platform capable of performing an autonomous radiation survey routine over a commanded area of a nuclear facility. If signs of anomalous radiation sources are detected in the area, the system possesses features to locate and characterize them. This procedure analyzes the data statistically using Recursive Bayesian Estimation and is supported by isotopic identification via gamma spectroscopy, which enables efficiency improvements to the method. The developed implementation improves significantly on prior approaches to locating radiation sources.
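The recursive Bayesian estimation idea can be sketched at toy scale: a grid of candidate source positions is updated measurement by measurement with a Poisson count likelihood, and the posterior peak localizes the source. Everything below (the 1-D geometry, the inverse-square response model, the source strength, and the detector positions) is hypothetical, not the dissertation's implementation.

```python
import math

def expected_counts(src_x, det_x, strength, background=1.0):
    """Hypothetical detector response: inverse-square falloff from a 1-D
    source position plus a constant background (arbitrary units)."""
    return background + strength / ((src_x - det_x) ** 2 + 0.1)

def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

grid = [i * 0.1 for i in range(101)]     # candidate source positions on [0, 10]
log_post = [0.0] * len(grid)             # flat prior in log space
true_x, strength = 6.3, 50.0

for det_x in [1.0, 3.0, 5.0, 7.0, 9.0]:  # survey measurement locations
    obs = round(expected_counts(true_x, det_x, strength))  # synthetic data
    # Recursive Bayesian update: accumulate this measurement's log-likelihood.
    for i, gx in enumerate(grid):
        log_post[i] += poisson_logpmf(obs, expected_counts(gx, det_x, strength))

est = grid[max(range(len(grid)), key=log_post.__getitem__)]
print(est)
```

Because the update is recursive, each new measurement refines the posterior in place, which is what lets a survey terminate after fewer total measurements once the source is well localized.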
The practical result is that radiation fields can be characterized with fewer total measurements and with greater mathematical robustness, all without human presence in the radiation area.

## Discontinuous Galerkin methods for resolving nonlinear and dispersive nearshore waves (2014-05)

Panda, Nishant; Dawson, Clinton N.

Nearshore hydrodynamics has been an important research area dealing with coastal processes. The nearshore coastal region is the region between the shoreline and a notional offshore limit, usually defined as the limit where the depth becomes so large that it no longer influences the waves. This spatially limited but highly energetic zone is where water waves shoal, break, and transmit energy to the shoreline, governed by highly dispersive and nonlinear effects. An accurate understanding of these phenomena is extremely useful, especially in emergency situations during hurricanes and storms. The shallow water assumption is valid only in regions where the characteristic wavelength exceeds a typical depth by orders of magnitude; where it breaks down, Boussinesq-type equations have been used to model nearshore wave motion. Unfortunately, these equations are complex systems of coupled nonlinear and dispersive differential equations that have made the development of numerical approximations extremely challenging. In this dissertation, a local discontinuous Galerkin method for Boussinesq-Green-Naghdi equations is presented and validated against experimental results. The Green-Naghdi equations currently have many variants; we develop a numerical method in one horizontal dimension for the Green-Naghdi equations based on rotational characteristics in the velocity field. A stability criterion is also established for the linearized Green-Naghdi equations, and a careful proof of linear stability of the numerical method is carried out.
Verification is performed against a linearized standing-wave problem over flat bathymetry, and h- and p-error rates (p is denoted by K in this thesis) are plotted. The numerical method is validated with experimental data from dispersive and nonlinear test cases.

## An efficient solution procedure for simulating phonon transport in multiscale multimaterial systems (2013-05)

Loy, James Madigan; Murthy, Jayathi

Over the last two decades, advanced fabrication techniques have enabled the fabrication of materials and devices at sub-micron length scales. For heat conduction, the conventional Fourier model for predicting energy transport has been shown to yield erroneous results on such length scales. In semiconductors and dielectrics, energy transport occurs through phonons, which are quanta of lattice vibrations. When phase coherence effects can be ignored, phonon transport may be modeled using the semi-classical phonon Boltzmann transport equation (BTE). The objective of this thesis is to develop an efficient computational method to solve the BTE, both for single-material and multi-material systems, where transport across heterogeneous interfaces is expected to play a critical role. The resulting solver will find application in the design of microelectronic circuits and thermoelectric devices. The primary source of computational difficulty in solving the phonon BTE lies in the scattering term, which redistributes phonon energies in wave-vector space. In its complete form, the scattering term is nonlinear and is non-zero only when energy and momentum conservation rules are satisfied. To reduce complexity, scattering interactions are often approximated by the single mode relaxation time (SMRT) approximation, which couples different phonon groups to each other through a thermal bath at the equilibrium temperature.
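The SMRT closure described above can be caricatured in a few lines: each phonon group relaxes toward a shared equilibrium energy at its own rate. The group energies, relaxation times, and time step below are arbitrary illustrative values; a real solver would additionally tie the equilibrium energy to a lattice temperature chosen so that scattering conserves energy.

```python
# Illustrative SMRT sketch: two phonon groups relax toward a common
# equilibrium energy at rates set by their relaxation times
# (hypothetical values, arbitrary units).
def smrt_step(e, e_eq, tau, dt):
    """One explicit step of de/dt = (e_eq - e) / tau for each group."""
    return [ei + dt * (e_eq - ei) / t for ei, t in zip(e, tau)]

e = [1.0, 3.0]    # phonon group energies
tau = [0.5, 2.0]  # relaxation times (groups couple to the bath at different rates)
e_eq = 2.0        # thermal-bath (equilibrium) energy
for _ in range(2000):
    e = smrt_step(e, e_eq, tau, dt=0.01)
print(e)
```

Both groups decay exponentially onto the bath energy; the disparity in tau values is a miniature of the large mean-free-path spread that makes the full problem stiff.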
The most common methods for solving the BTE in the SMRT approximation employ sequential solution techniques, which solve for the spatial distribution of the phonon energy of each phonon group one after another. Coupling between phonons is treated explicitly and updated after all phonon groups have been solved individually. When the domain length is small compared to the phonon mean free path, corresponding to a high Knudsen number (Kn, the ratio of mean free path to domain length), this sequential procedure works well. At low Knudsen number, however, the procedure suffers long convergence times because the coupling between phonon groups is too strong for an explicit treatment of coupling to suffice. In problems of practical interest, such as silicon-based microelectronics, phonon groups have a very large spread in mean free paths, resulting in a combination of high and low Knudsen numbers; in these problems, it is virtually impossible to obtain solutions using sequential solution techniques. In this thesis, a new computational procedure for solving the non-gray phonon BTE under the SMRT approximation is developed. This procedure, called the coupled ordinates method (COMET), is shown to achieve significant solution acceleration over the sequential technique for a wide range of Knudsen numbers. Its success lies in treating phonon-phonon coupling implicitly through a direct solution of all equations in wave-vector space at a particular spatial location. To increase coupling in the spatial domain, this procedure is embedded as a relaxation sweep in a geometric multigrid method. Due to the heavy computational load at each spatial location, COMET exhibits excellent scaling on parallel platforms using domain decomposition. On serial platforms, COMET achieves accelerations of 60 times over the sequential procedure for Kn < 1.0 for gray phonon transport problems, and accelerations of 233 times for non-gray problems.
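The advantage of implicit over explicit coupling, which underlies COMET, can be seen on a toy strongly coupled two-unknown linear system (the matrix is hypothetical, chosen only to mimic strong inter-group coupling): a direct point solve gets the answer at once, while lagging the coupling term, as a sequential sweep does, takes many iterations.

```python
# Implicit vs. explicit treatment of coupling on a 2x2 system A x = b.
def solve_direct(a11, a12, a21, a22, b1, b2):
    """Implicit coupling: solve both unknowns simultaneously."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def solve_sequential(a11, a12, a21, a22, b1, b2, tol=1e-8, max_iter=100000):
    """Explicit coupling: lag the off-diagonal terms, sweep until converged."""
    x1 = x2 = 0.0
    for it in range(1, max_iter + 1):
        x1_new = (b1 - a12 * x2) / a11
        x2_new = (b2 - a21 * x1_new) / a22
        if abs(x1_new - x1) + abs(x2_new - x2) < tol:
            return (x1_new, x2_new), it
        x1, x2 = x1_new, x2_new
    return (x1, x2), max_iter

# Strong coupling: off-diagonals nearly as large as the diagonals.
coeffs = (1.0, -0.99, -0.99, 1.0, 1.0, 1.0)
direct = solve_direct(*coeffs)
seq, iters = solve_sequential(*coeffs)
print(direct, iters)
```

The sequential sweep needs hundreds of iterations here because its contraction factor approaches one as the coupling strengthens, which is the low-Knudsen-number pathology described above.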
COMET is then extended to include phonon transport across heterogeneous material interfaces using the diffuse mismatch model (DMM), in which coupling between phonon groups occurs because of reflection and transmission. Efficient heuristic algorithms are developed for interface agglomeration in creating coarse multigrid levels. COMET is tested on phonon transport problems with multiple interfaces and shown to outperform the sequential technique. Finally, the utility of COMET is demonstrated by simulating phonon transport in a nanoparticle composite of silicon and germanium, using a realistic geometry constructed from x-ray CT scans. This composite is typical of those used to reduce lattice thermal conductivity in thermoelectric materials. The effective thermal conductivity of the composite is computed for two different domain sizes over a range of temperatures. It is found that at low temperatures the thermal conductivity increases with temperature: interface scattering dominates and is itself insensitive to temperature, so the increase is primarily a result of the increase in phonon population with temperature, consistent with Bose-Einstein statistics. At higher temperatures, Umklapp scattering begins to take over, causing a peak in thermal conductivity and a subsequent decrease with temperature. However, unlike in bulk materials, the peak is shallow, consistent with the strong role of interface scattering. The interaction of the phonon mean free path with the particulate length scale is also examined.
The results also suggest that materials with very dissimilar cutoff frequencies would yield a thermal conductivity closest to the lowest possible value for the given geometry.

## Fast methods to model the response of fluid-filled fractures and estimate the fracture properties (2018-11-21)

Alulaiw, Badr Abdullah; Sen, Mrinal K.; Spikes, Kyle T.; Fomel, Sergey; Grand, Stephen P.; Foster, Douglas

Estimation of fracture orientation and properties has become an important part of seismic reservoir characterization, especially in unconventional reservoirs, because of the crucial role of fractures in enhancing the permeability of tight reservoirs. The presence of fluid inside the fractures affects their seismic response. Using equivalent medium theories, seismic wave signatures such as Amplitude Variation with Offset and azimuth (AVOz), Normal Moveout (NMO) correction, and shear-wave splitting have been used to detect the presence of gas-filled and fluid-filled fractures. These methods, however, are unable to specify the type of fluid inside the fractures and cannot be used for thin beds and complex geology where the subsurface properties change laterally. Hence, modeling the seismic waveform using numerical methods is inevitable; the main limitation of those methods is their high computational cost. In this dissertation, I focus on developing two fast numerical methods to model the response of fluid-filled fractures, as well as one fast global optimization method to estimate the fracture properties. Although local optimization methods are computationally cheap, the probability of being trapped in a local minimum becomes high when the initial model is not close to the global minimum, especially for highly nonlinear problems. Quantum Annealing (QA) is a recent global optimization method that has been shown to be faster than Simulated Annealing (SA) in many situations, and it has recently been applied to geophysical problems.
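For readers unfamiliar with annealing-type global optimization, here is a minimal classical simulated-annealing sketch on a hypothetical multimodal 1-D misfit. It does not reproduce QA's kinetic term, only the accept/reject-with-cooling idea that both QA and SA share; the objective, cooling schedule, and step size are all arbitrary illustrative choices.

```python
import math
import random

def objective(x):
    """Hypothetical multimodal 1-D misfit with its global basin near x = 2."""
    return (x - 2.0) ** 2 + 2.0 * math.sin(6.0 * x) + 2.0

def simulated_annealing(seed=1, steps=20000, t0=5.0):
    rng = random.Random(seed)
    x = rng.uniform(-10.0, 10.0)
    best_x, best_f = x, objective(x)
    for k in range(steps):
        t = t0 / (1 + 0.01 * k)            # cooling schedule
        cand = x + rng.gauss(0.0, 0.5)     # random perturbation
        df = objective(cand) - objective(x)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if df < 0 or rng.random() < math.exp(-df / t):
            x = cand
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)
    return best_x, best_f

x_best, f_best = simulated_annealing()
print(x_best, f_best)
```

The uphill-acceptance probability shrinks as the temperature falls, letting the walker escape local minima early and settle into a deep basin late; QA's modification replaces the thermal fluctuation with a tunneling-like kinetic term.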
In this research, I modify QA by proposing a new kinetic term that helps QA converge faster to the global minimum. With a synthetic dataset, I illustrate that QA is faster than Very Fast Simulated Annealing (VFSA) using a highly nonlinear forward model that computes the response of seismic Amplitude Variation with Angle (AVA) for spherical waves. Most AVA inversion algorithms are based on plane-wave solutions, whereas seismic surveys use point sources that generate spherical waves. Although the plane-wave solution is an excellent approximation for spherical waves, the approximation breaks down in the vicinity of the critical angle. Here, I implement an AVA inversion method for three parameters (P-wave velocity, S-wave velocity, and density) based on an analytical approximation for spherical waves. In addition, I apply this algorithm to a 2D seismic dataset from the Cana field, Oklahoma, with the primary objective of resolving the Woodford formation. I compare the results with those obtained by a local optimization method; the results clearly demonstrate the superior performance of the proposed inversion method, and the inverted images show clear delineation of the Woodford formation. For a reservoir containing vertical and rotationally invariant fractures, the linear slip model characterizes the reservoir using four properties: two elastic properties describing the isotropic host rock, and two fracture properties, the normal (ΔN) and tangential (ΔT) fracture weaknesses. This model, however, ignores the effect of pore porosity on the anisotropy, and hence the estimated fracture properties might be inaccurate. In this work, I estimate the fracture properties as well as the pore porosity using a new expression for the stiffness tensor of a porous fractured medium. I use the ray-Born approximation to calculate the seismic response of a laterally varying porous reservoir and QA to estimate the fracture properties.
Using numerical experiments, I compare the inversion results from unconstrained and constrained simultaneous (PP and PSV components) seismic inversion, as well as constrained inversion using only the PP component. I explain the importance of including a constraint to mitigate the effect of the equivalence problem between ΔN and porosity. Unlike the unconstrained inversion, the estimated properties from the constrained inversion are acceptable, and the simultaneous constrained inversion is more robust than using the PP component alone. I apply this algorithm to a 3D multicomponent seismic dataset acquired in Saudi Arabia. The estimated fracture orientation agrees with those obtained in previous studies of the same area using borehole image logs, oriented cores, drilling observations, and seismic data, and the porosity computed from available well logs matches the inverted porosity very well. Computationally cheap analytical methods and equivalent medium theories available to model seismic wavefields diffracted by multiple fluid-filled fractures are not capable of handling complex fracture models or wave multi-scattering; hence, using expensive numerical methods is inevitable. The advantages of the boundary element method (BEM) over domain methods, such as finite difference and finite element methods, include the ease of handling irregular fracture geometry and the reduction of the problem dimension, which makes the computation fast. Moreover, BEM models the complete wavefield, including multiples, reverberations, and waves refracted inside the fractures. The downside of BEM is that the computational cost increases rapidly with the number of boundary elements, making it inefficient for modeling a large number of 2D cracks or 3D fractures.
By combining the Indirect Boundary Element Method (IBEM) and a Generalized Born Series (GBS), I propose a new algorithm that can compute the response of 3D fluid-filled fracture sets efficiently. In addition, when I consider equally spaced fractures with the same geometry within a fracture set, the computation can be performed even more rapidly. I compare the wavefields obtained using this approximation in five numerical experiments with those obtained from IBEM and show that the results are accurate in many situations.

## Global convection in Earth's mantle: advanced numerical methods and extreme-scale simulations (2019-02-06)

Rudi, Johann; Ghattas, Omar N.; Stadler, Georg; Gurnis, Michael; Ren, Kui; Biros, George; Hesse, Marc

The thermal convection of rock in Earth's mantle and the associated plate tectonics are modeled by nonlinear incompressible Stokes and energy equations. This dissertation focuses on the development of advanced, scalable linear and nonlinear solvers for numerical simulations of realistic instantaneous mantle flow, where several computational challenges must be overcome. The most notable are the severe nonlinearity, heterogeneity, and anisotropy of the mantle's rheology, as well as a wide range of spatial scales and highly localized features. Resolving the crucial small-scale features efficiently necessitates adaptive methods, while computational results benefit greatly from high accuracy per degree of freedom and local mass conservation. Consequently, the discretization of Earth's mantle is carried out with high-order finite elements on aggressively adaptively refined hexahedral meshes, with a continuous, nodal velocity approximation and a discontinuous, modal pressure approximation.
These velocity-pressure pairings yield optimal asymptotic convergence rates of the finite element approximation to the infinite-dimensional solution with decreasing mesh element size, are inf-sup stable on general, non-conforming hexahedral meshes with "hanging nodes," and preserve mass locally at the element level due to the discontinuous pressure. However, because of the difficulties cited above and the desired accuracy, the large implicit systems to be solved are extremely poorly conditioned, and sophisticated linear and nonlinear solvers, including powerful preconditioning techniques, are required. The nonlinear Stokes system is solved using a grid-continuation, inexact Newton-Krylov method. We measure the residual of the momentum equation in the H⁻¹-norm for the backtracking line search, to avoid overly conservative update steps whose lengths are significantly reduced from one. The Newton linearization is augmented by a perturbation of a highly nonlinear term in the mantle's rheology, resulting in dramatically improved nonlinear convergence. We present a new Schur complement-based Stokes preconditioner, weighted BFBT, that exhibits robustly fast convergence for Stokes problems with smooth but highly varying (up to 10 orders of magnitude) viscosities, optimal algorithmic scalability with respect to mesh refinement, and only a mild dependence on the polynomial order of the high-order finite element discretization. In addition, we derive theoretical eigenvalue bounds to prove spectral equivalence of our inverse Schur complement approximation. Finally, we present a parallel hybrid spectral-geometric-algebraic multigrid (HMG) method to approximate the inverses of the Stokes system's viscous block and variable-coefficient pressure Poisson operators within weighted BFBT.
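The damped Newton iteration with a backtracking line search on the residual norm can be sketched on a hypothetical 2x2 nonlinear system; here an exact linear solve (via Cramer's rule) stands in for the Krylov solver, and the Euclidean norm stands in for the H⁻¹-norm of the abstract.

```python
# Damped Newton with backtracking line search, on a toy system
# F(x, y) = (x^2 + y^2 - 4, x - y) = 0 with root (sqrt(2), sqrt(2)).
def residual(x, y):
    return (x * x + y * y - 4.0, x - y)

def newton_step(x, y):
    """Solve J d = -F exactly; Jacobian is [[2x, 2y], [1, -1]]."""
    f1, f2 = residual(x, y)
    det = 2 * x * (-1) - 2 * y * 1
    d1 = (-f1 * (-1) - (-f2) * 2 * y) / det
    d2 = (2 * x * (-f2) - 1 * (-f1)) / det
    return d1, d2

def norm(f):
    return (f[0] ** 2 + f[1] ** 2) ** 0.5

x, y = 3.0, 1.0
for _ in range(50):
    dx, dy = newton_step(x, y)
    alpha = 1.0
    # Backtrack until the residual norm decreases.
    while alpha > 1e-4 and norm(residual(x + alpha * dx, y + alpha * dy)) >= norm(residual(x, y)):
        alpha *= 0.5
    x, y = x + alpha * dx, y + alpha * dy
print(x, y)
```

The line search rejects full steps that would increase the residual; measuring that residual in a problem-appropriate norm, as the dissertation does with H⁻¹, is precisely what prevents the overly conservative step lengths mentioned above.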
Building on the parallel scalability of HMG, our Stokes solver demonstrates excellent parallel scalability to 1.6 million CPU cores without sacrificing algorithmic optimality.

## Monte Carlo radiation hydrodynamics in the super-Eddington regime (2018-06-27)

Tsang, Tsz Ho; Milosavljević, Miloš; Bromm, Volker; Jogee, Shardha; Evans, Neal J.; Davis, Shane

In this dissertation, we present three projects addressing the dynamical importance of radiation in turbulent media with super-Eddington flux. Examples of such media are massive star-forming environments and supermassive star atmospheres. While many theoretical models assert a pivotal role for radiation in driving strong outflows and setting the star formation efficiency in the course of massive star formation, they are often based on ideal geometries and closure relations for the moment equations of radiation. To directly tackle the challenge of numerically modeling radiation-matter interactions in hydrodynamical simulations, we have adopted and tested a hybrid Monte Carlo radiation transport scheme. In the first project, with a standardized two-dimensional radiation-driven-wind setup, we show that low-order methods tend to artificially reinforce the development of low-density channels and underestimate the strength of radiation pressure. The accuracy of any numerical radiation transport scheme in producing faithful dynamics therefore depends on the validity of its underlying assumptions. In the second project, we carry out radiation hydrodynamical simulations of the formation of super star clusters in supersonically turbulent molecular clouds. The gas distribution is strongly inhomogeneous, which reduces the effectiveness of radiation pressure in halting gas accretion compared to previous predictions.
In the last project, unlike the inflow-outflow scenario of the first two, we aim to simulate radiation hydrodynamics in quasi-hydrostatic media in which the opacity is extremely sensitive to density and temperature. We present the implementation and robustness tests of the hybrid Monte Carlo estimators in preparation for direct simulations of the convective, radiation-dominated dynamics prevalent in the atmospheres of supermassive stars.

## Numerical discretization effects in large eddy simulation of turbulence (2022-06-15)

Yalla, Gopal Robert; Moser, Robert deLancey; Engquist, Björn; Ghattas, Omar; Heimbach, Patrick; Oliver, Todd; Willcox, Karen

Large eddy simulation (LES) is now over half a century old, and while it has become more widely used as computational capabilities have expanded, its adoption as an engineering tool has arguably been limited by the shortcomings of subgrid models. Most current subgrid models are formulated under the assumption that the subgrid scales are approximately isotropic and that other complications, such as numerical discretization and inhomogeneous resolution, are negligible. This limits the fidelity of the models when applied to complex flows. For LES to become a robust engineering tool, subgrid models applicable to more complex scenarios will be required; in particular, the effects of numerical discretization must be considered. In this thesis, we develop several analytical and computational tools for identifying the characteristics an LES acquires from numerical discretization and filtering. First, the effects of numerical dispersion error on the turbulent energy cascade are explored. It is shown that dispersion error due to convection by a large mean velocity causes a decoherence of the phase relationship among interacting Fourier modes, resulting in a reduction of the energy transfer rate from large to small scales.
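The dispersion error behind this decoherence can be quantified with the standard modified-wavenumber analysis: a second-order central difference transports the Fourier mode exp(ikx) with effective wavenumber sin(kh)/h, so phase errors grow sharply toward the grid cutoff. This is textbook numerical analysis, not the thesis's specific scheme; the sample wavenumbers are arbitrary.

```python
import math

def modified_wavenumber(k, h):
    """Effective wavenumber of the 2nd-order central difference
    (u[i+1] - u[i-1]) / (2h) applied to exp(i k x): k' = sin(k h) / h."""
    return math.sin(k * h) / h

h = 1.0
low = modified_wavenumber(0.1, h)    # well-resolved mode
high = modified_wavenumber(2.5, h)   # mode near the grid cutoff (pi/h)
err_low = abs(low / 0.1 - 1.0)       # relative phase-speed error
err_high = abs(high / 2.5 - 1.0)
print(err_low, err_high)
```

Well-resolved modes convect at nearly the right speed while near-cutoff modes lag badly, which is why the phase relationships among interacting modes decohere.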
Nonlinear dispersion error due to convection by turbulent fluctuations is also explored through the development of an eddy-damped quasi-normal Markovian (EDQNM) type of analysis applicable to the filtered turbulence in an LES. EDQNM is shown to be a useful tool for exploring dispersion effects because it exposes the relaxation rate of the third-order velocity correlations. An explicit filtering formulation based on the properties of the underlying numerics is developed to remove the highly dispersive wavemodes in an LES. Further, the EDQNM LES theory is used to determine the a priori properties of the subgrid stress needed to recover an inertial-range spectrum in the presence of non-spectral numerics and non-cutoff explicit filters. Second, the convection of turbulence through nonuniform grids is explored. This introduces additional challenges due to so-called commutation error, the neglect of the commutator of the filtering and differentiation operators. We employ a multiscale asymptotic analysis to investigate the characteristics of this commutator, show how commutation error manifests in simulation, and demonstrate its impact on the convection of homogeneous isotropic turbulence through a coarsening grid. A connection is made between the commutation error and the propagation properties of the underlying numerics, and a framework for modeling the commutator that accounts for properties of the discretization is proposed. The forcing of turbulence convecting through a refining grid is also explored, and a formulation based on divergence-free wavelets is proposed. Results in isotropic turbulence suggest this formulation may be effective at energizing newly resolvable scales, thereby allowing sharper grid transitions to finer-resolved regions.
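The commutation error can be demonstrated numerically in one dimension: with a uniform filter width, discrete filtering and differentiation are both circular convolutions and commute exactly, but a position-dependent width (a nonuniform grid, in effect) breaks this. The grid, filter widths, and test field below are hypothetical illustrative choices.

```python
import math

N = 400
L = 2 * math.pi
x = [i * L / N for i in range(N)]

def ddx(u):
    """Periodic 2nd-order central difference."""
    h = L / N
    return [(u[(i + 1) % N] - u[(i - 1) % N]) / (2 * h) for i in range(N)]

def box_filter(u, widths):
    """Moving average with a position-dependent half-width (in grid points)."""
    out = []
    for i, w in enumerate(widths):
        out.append(sum(u[j % N] for j in range(i - w, i + w + 1)) / (2 * w + 1))
    return out

f = [math.sin(xi) for xi in x]
uniform = [5] * N                                         # constant filter width
varying = [3 + (6 if i > N // 2 else 0) for i in range(N)]  # width jumps mid-domain

def commutator_norm(widths):
    a = ddx(box_filter(f, widths))   # differentiate the filtered field
    b = box_filter(ddx(f), widths)   # filter the differentiated field
    return max(abs(ai - bi) for ai, bi in zip(a, b))

print(commutator_norm(uniform), commutator_norm(varying))
```

The commutator vanishes (to roundoff) for the uniform width and is concentrated where the width changes, which is exactly where coarsening-grid LES incurs the error the thesis models.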
There are several additional challenges to formulating more broadly applicable subgrid models for LES, and we expect that the techniques developed here will also be useful for addressing this wider range of issues.

## Numerical multiscale methods: from homogenization to milestoning (2024-05)

Chen, Ziheng; Engquist, Björn; Martinsson, Per Gunnar; Arbogast, Todd J.; Tsai, Richard Yen-Hsi; Bajaj, Chandrajit

The dissertation addresses the challenges posed by multiscale problems in applied mathematics, which stem from the intricate interplay between microscales and the computational demands of resolving fine details. To alleviate this burden, numerical homogenization and averaging methods are favored. This study explores three interconnected topics related to numerical techniques for handling multiscale problems in both the spatial and temporal domains. In the first part, we establish an equivalence principle between time averaging and space homogenization. This principle facilitates the application of various numerical averaging techniques, such as FLAVORS, Seamless, and HMM, to boundary value problems. Moreover, we introduce the dilation operator as a decomposition-free approach to numerical homogenization in higher dimensions, and we utilize the Synchrosqueezing transform as a preprocessing step to extract the oscillatory components crucial for the structure-aware dilation method. The second part extends the Deep Ritz method to multiscale problems. We use scale convergence theory to derive the Γ-limit of energy functionals exhibiting oscillatory behavior. The resulting limit object, formulated as a minimization problem, captures the spatial oscillations and can be tackled with existing neural network architectures. In the third part, we lay the groundwork for the milestoning algorithm, a successful tool in computational chemistry for molecular dynamics simulations.
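In one dimension, the homogenization limit invoked above has a classical closed form: for the elliptic problem -(a(x/ε) u')' = f, the effective coefficient is the harmonic mean of a, not the arithmetic mean. A sketch with a hypothetical two-phase coefficient:

```python
def harmonic_mean(vals):
    return len(vals) / sum(1.0 / v for v in vals)

# Hypothetical two-phase oscillatory coefficient a(y) alternating between
# 1 and 100 with equal volume fractions.
samples = [1.0 if i % 2 == 0 else 100.0 for i in range(1000)]

a_eff = harmonic_mean(samples)        # classical 1-D homogenized coefficient
a_arith = sum(samples) / len(samples)  # naive average, badly overestimates
print(a_eff, a_arith)
```

The effective coefficient (about 1.98 here) is dominated by the weak phase, while the arithmetic mean (50.5) is far too stiff; numerical homogenization schemes such as HMM are built to recover the former without resolving the microscale.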
We adapt this algorithm to a domain-decomposition-based framework for coarse-grained descriptions and establish the well-posedness of the primal and dual PDEs. Additionally, we investigate the convergence rate and optimal milestone placements. We illustrate this framework by interpreting the Forward Flux algorithm as a specific example.

Item Reservoir simulation and optimization of CO₂ huff-and-puff operations in the Bakken Shale (2014-08) Sanchez Rivera, Daniel; Balhoff, Matthew T.; Mohanty, Kishore Kumar

A numerical reservoir model was created to optimize CO₂ Huff-and-Puff operations in the Bakken Shale. Huff-and-Puff is an enhanced oil recovery treatment in which a well alternates between injection, soaking, and production. Injecting CO₂ into the formation and allowing it to “soak” re-pressurizes the reservoir and improves oil mobility, boosting production from the well. A compositional reservoir simulator was used to study the various design components of the Huff-and-Puff process in order to identify the parameters with the largest impact on recovery and understand the reservoir’s response to cyclical CO₂ injection. It was found that starting Huff-and-Puff too early in the life of the well diminishes its effectiveness, and that shorter soaking periods are preferable to longer waiting times. Huff-and-Puff works best in reservoirs with highly conductive natural fracture networks, which allow CO₂ to migrate deep into the formation and mix with the reservoir fluids. The discretization of the computational domain has a large impact on the simulation results, with coarser gridding corresponding to larger projected recoveries. Doubling the number of hydraulic fractures per stage results in considerably greater CO₂ injection requirements without proportionally larger incremental recovery factors.
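The cyclic injection–soak–production schedule studied above can be sketched as a simple control sequence; the phase durations below are hypothetical placeholders, not values from the study:

```python
from dataclasses import dataclass
from itertools import cycle, islice

@dataclass
class Phase:
    name: str   # "inject", "soak", or "produce"
    days: int   # phase duration

def huff_and_puff_schedule(n_cycles, inject_days=30, soak_days=15, produce_days=90):
    """One huff-and-puff cycle = CO2 injection, shut-in soak, then production.
    Returns the flat list of phases for n_cycles cycles."""
    template = [Phase("inject", inject_days),
                Phase("soak", soak_days),
                Phase("produce", produce_days)]
    return list(islice(cycle(template), 3 * n_cycles))

schedule = huff_and_puff_schedule(4)
total_days = sum(p.days for p in schedule)
```

In an optimization study like the one described, the per-phase durations and the start time of the first cycle are the decision variables swept by the simulator.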
Incremental recovery from CO₂ Huff-and-Puff appears to be insufficient to make the process commercially feasible under current economic conditions. However, re-injecting mixtures of CO₂ and produced hydrocarbon gases was shown to be technically and economically viable, which could significantly improve the profit margins of Huff-and-Puff operations. A substantial portion of this project involved studying alternative numerical methods for modeling hydraulically fractured reservoirs. A domain decomposition technique known as mortar coupling was used to model the reservoir system as two individually solved subdomains: fracture and matrix. A mortar-based numerical reservoir simulator was developed, and its results were compared to a traditional full-domain finite difference model for the Cinco-Ley et al. (1978) finite-conductivity vertical fracture problem. Despite some numerical issues, mortar coupling closely matched Cinco-Ley et al.'s (1978) solution and has potential applications in complex problems where decoupling the fracture-matrix system might be advantageous.

Item Steady-state spherical accretion using smoothed particle hydrodynamics (2011-12) Baumann, Mark Chapple; Matzner, Richard A. (Richard Alfred), 1942-; Dicus, Duane; Klein, Josh; Kopp, Sacha; Marder, Michael

Due to its adaptable nature in a broad range of problem domains, Smoothed Particle Hydrodynamics (SPH) is a popular numerical technique for computing solutions in astrophysics. This dissertation discusses the SPH technique and assesses its capabilities for reproducing steady-state spherically symmetric accretion flow. The accretion scenario is of great interest for its applicability in a diverse array of astrophysical phenomena and, under certain assumptions, it also provides an accepted analytical solution against which the numerical method can be validated.
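One generic validation invariant for this problem (an illustration, not necessarily the dissertation's actual test) is that the steady-state continuity equation reduces to a radius-independent mass flux, Mdot = 4 π r² ρ(r) v(r); the profile below is a hypothetical free-fall-like choice constructed so the flux is exactly constant:

```python
import numpy as np

def mass_flux(r, rho, v):
    """Steady spherical accretion: Mdot = 4 pi r^2 rho v is independent of r."""
    return 4.0 * np.pi * r**2 * rho * v

# Hypothetical profile: v ~ r^(-1/2) and rho ~ r^(-3/2),
# so r^2 * rho * v = 1 and the flux equals 4 pi at every radius.
r = np.linspace(1.0, 10.0, 50)
v = r ** -0.5
rho = r ** -1.5
mdot = mass_flux(r, rho, v)
```

In a numerical study, the scatter of `mdot` across radii (and across time, once transients decay) gives a direct measure of how well the scheme holds the steady state.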
After deriving the necessary equations of astrophysical fluid dynamics, reviewing the solution of the steady-state spherical accretion problem in detail, and developing the SPH methodology, this work proposes solutions to the issues that must be overcome for SPH to successfully reproduce steady-state spherical accretion flow. Several techniques for setting initial data are addressed, resolution requirements are illustrated, inner and outer boundary conditions are discussed, and artificial-dissipation parameters and methodologies are explored.
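At the core of any SPH scheme like the one discussed above is the kernel-weighted density estimate, rho_i = Σ_j m_j W(|x_i − x_j|, h). A minimal 1D sketch (an illustrative aside, not the dissertation's implementation) using the standard cubic spline kernel:

```python
import numpy as np

def w_cubic_spline_1d(dx, h):
    """Standard cubic spline (M4) SPH kernel in 1D, compact support 2h."""
    q = np.abs(dx) / h
    sigma = 2.0 / (3.0 * h)                          # 1D normalization
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def sph_density(x, m, h):
    """rho_i = sum_j m_j W(x_i - x_j, h), summed over all particles."""
    dx = x[:, None] - x[None, :]
    return w_cubic_spline_1d(dx, h) @ m

# Uniformly spaced particles carrying unit mass per length should
# recover rho = 1 in the interior; edge particles are deficient,
# which is one face of the boundary-condition issues discussed above.
n, spacing = 100, 0.01
x = np.arange(n) * spacing
m = np.full(n, spacing)            # mass per particle for unit density
rho = sph_density(x, m, h=2.0 * spacing)
```

The density deficit at the ends of the particle line is a simple illustration of why inner and outer boundary treatments need special care in accretion setups.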