Browsing by Subject "Inverse problems"
Now showing 1 - 16 of 16
Item Accelerating inverse solutions with machine learning and randomization (2023-04-18)
Wittmer, Jonathan; Bui-Thanh, Tan; Tsai, Yen-Hsi (Richard); Dawson, Clinton; Ghattas, Omar; Sundar, Hari
Inverse problems is a field of applied mathematics, with wide application in both the scientific community and industry, in which the objective is to estimate some parameter of interest (PoI) from observations. These two quantities are related by a mapping known as the parameter-to-observable (PtO) map, which may be nonlinear. While the forward problem may be well-posed, the inverse problem is often ill-posed, making parameter estimation difficult. Ill-posedness in the Hadamard sense means that at least one of the following is true: 1) the solution does not exist, 2) the solution is not unique, or 3) the solution does not depend continuously on the data. In cases of interest where the PtO map is an observational operator acting on the solution of a system of PDEs discretized on a domain, the ill-posedness can be severe due to the compact nature of the PtO map. To address the ill-posedness, practitioners often write the solution of the inverse problem as the solution of a regularized least squares optimization problem, where the regularization is constructed to combat the ill-posedness, resulting in the existence of a unique solution that depends continuously on the data. There are many classical regularization methods, including Tikhonov regularization, total variation (TV) regularization, and nonconvex regularization strategies such as using an ℓ_p norm with 0 < p < 1. In addition to estimating the PoI itself, it is also of interest to estimate the associated uncertainty. To do this, a Bayesian approach is typically employed, where the solution to the inverse problem is a posterior probability density rather than a deterministic quantity. By Bayes' theorem, the posterior is proportional to the product of the likelihood and the prior probability density. In the case of Gaussian observational noise and prior, finding the maximum a posteriori (MAP) point is equivalent to solving the regularized least squares optimization problem in weighted norms, where the likelihood yields the data misfit term weighted by the inverse of the noise covariance matrix and the prior yields the regularization term weighted by the inverse of the prior covariance. That is, computing the MAP estimate of the PoI in the Bayesian framework requires solving a deterministic inverse problem, so the apparatus for solving Bayesian inverse problems builds on the algorithms and tools used for solving deterministic inverse problems. This understanding is what enables us to gain insight into the inverse solutions from various methods and to develop new techniques that begin with deterministic inverse problems but can then be analyzed from a statistical perspective and used to quantify uncertainty. Since the likelihood depends on the PtO map, significant emphasis has been placed in past decades on developing robust and scalable computational models along with excellent problem-specific priors. On the other hand, there has been a recent trend to abandon models and embrace the era of big data. We aim to show that neither approach alone is best and that the surplus of data can be used in concert with classical models both to improve the quality of inverse solutions and to accelerate the solution process using modern machine learning techniques.
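In the linear case with Gaussian noise and prior, the MAP point described above is the solution of a weighted, regularized least squares problem, and the posterior covariance is the inverse of the resulting Hessian. The following NumPy sketch only illustrates that relationship on a small, hypothetical forward matrix; it is not the solver developed in the dissertation.

```python
import numpy as np

# Hypothetical linear parameter-to-observable (PtO) map and Gaussian noise/prior
# covariances, for illustration only.
rng = np.random.default_rng(0)
n_param, n_obs = 50, 20
A = rng.standard_normal((n_obs, n_param))          # PtO map
G_noise = 0.1**2 * np.eye(n_obs)                    # noise covariance
G_prior = 1.0**2 * np.eye(n_param)                  # prior covariance (zero mean)
m_true = rng.standard_normal(n_param)
d = A @ m_true + 0.1 * rng.standard_normal(n_obs)   # noisy observations

# MAP point = argmin 0.5*||A m - d||^2_{G_noise^-1} + 0.5*||m||^2_{G_prior^-1},
# i.e. the solution of (A^T G_noise^-1 A + G_prior^-1) m = A^T G_noise^-1 d.
Wn, Wp = np.linalg.inv(G_noise), np.linalg.inv(G_prior)
H = A.T @ Wn @ A + Wp                 # Hessian of the weighted objective
m_map = np.linalg.solve(H, A.T @ Wn @ d)

# For this linear-Gaussian case the posterior covariance is simply H^{-1}.
posterior_cov = np.linalg.inv(H)
print(m_map[:3], np.diag(posterior_cov)[:3])
```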
In this research, we use global full waveform seismic inversion and Poisson's equation as the prototypical inverse problems. Sparsely located seismogram observations are used to reconstruct the acoustic wave speed for the seismic inverse problem. This inverse problem is constrained by a three-dimensional acoustic wave equation, a system of time-dependent PDEs discretized on the entire globe. Full waveform inversion is an important problem in science and industry, with applications to reservoir characterization and various biomedical imaging problems. We use the adjoint method as the starting point from which we develop several new inversion methods. Seismic inversion is a good prototypical problem because it is a nonlinear inverse problem with high computational cost for which scalable libraries exist, enabling us to study the effectiveness of our methods on practical large-scale inverse problems. Sparsely sampled temperature observations are used to reconstruct the underlying heat conductivity for the Poisson problem on a two-dimensional mesh. Poisson's equation is an excellent test problem because of the severe ill-posedness of inverting for the conductivity. We propose four new methods for solving PDE-constrained inverse problems:
1. The data-informed active subspace (DIAS) regularization approach was developed as an alternative to Tikhonov regularization in which the regularization is applied only in directions where the data are least informative.
2. The UQ-VAE framework was developed as a hybrid data/model-driven machine learning approach for rapid MAP estimation and uncertainty quantification.
3. An autoencoder-based compression strategy was developed to address the high cost of solving large-scale time-dependent inverse problems by eliminating the need for checkpointing.
4. By combining the DIAS approach and autoencoder compression, we aim to provide a comprehensive method for computing a data-informed inverse solution while mitigating the additional computational cost, enabling the DIAS method to scale to large problems.
Additionally, we develop a unifying theory for the convergence of randomized methods for solving inverse problems and show their effectiveness on the Poisson inverse problem.
Contributions to CSEM areas of interest:
Area A (Applied mathematics): The DI framework was rigorously derived from the truncated singular value decomposition. Its deterministic properties were analyzed from a spectral perspective, and we show that the DIAS prior is a valid regularization strategy. Additionally, we analyze the DIAS prior from a statistical perspective and show that for linear inverse problems with Gaussian noise and prior covariances, the posterior covariance of the DIAS solution is bounded below by the Tikhonov posterior covariance, implying that Tikhonov regularization results in over-confident uncertainty estimates. The UQ-VAE framework was rigorously derived by minimizing the Jensen-Shannon divergence (JSD) between the true posterior and the model posterior, parameterized by the weights of a deep neural network. We derive an explicit finite-sample-size loss function when the likelihood, prior, and posterior are all assumed to be Gaussian. We prove asymptotic convergence and derive a non-asymptotic error bound for a broad family of randomized solutions for linear and nonlinear inverse problems. From this family of randomized methods, we show the equivalence of several existing methods and derive new methods.
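The DIAS prior itself is constructed in the dissertation; the sketch below is only a rough caricature of the idea behind it, using the truncated SVD of a small, hypothetical linear PtO map to split parameter space into data-informed and data-uninformed directions and penalizing only the latter, so it can be compared against ordinary Tikhonov regularization.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_param, r = 30, 80, 10
A = rng.standard_normal((n_obs, n_param))            # hypothetical linear PtO map
d = A @ rng.standard_normal(n_param) + 0.01 * rng.standard_normal(n_obs)

# Truncated SVD: the leading right singular vectors span the directions the data
# inform best; their orthogonal complement is where regularization is most needed.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
V_r = Vt[:r].T
P_uninformed = np.eye(n_param) - V_r @ V_r.T          # projector onto the complement

alpha = 1e-2
# Ordinary Tikhonov: penalize every direction equally.
m_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n_param), A.T @ d)
# "Data-informed" caricature: penalize only the poorly informed directions.
m_di = np.linalg.solve(A.T @ A + alpha * P_uninformed, A.T @ d)
print(np.linalg.norm(m_tik - m_di))
```

In this toy setting the penalty vanishes on the span of the leading right singular vectors and acts as a scaled identity on the complement; the DIAS prior and its statistical analysis in the dissertation are considerably more involved.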
Area B (Numerical analysis and scientific computation): While the DIAS prior has firm mathematical foundations, computing the DIAS regularization cost and the corresponding gradient term is expensive, both computationally and in terms of storage. Therefore, we develop and investigate an approximate form of the DIAS prior that allows the action of the inverse of the DIAS prior covariance matrix on a vector to be applied approximately in a scalable fashion. We also derive and implement a form of the DIAS prior that involves low-rank projections and requires substantially less storage than the naive implementation would suggest. This approximate algorithm with low-rank projection is implemented on a large-scale seismic inverse problem solved on at least 64 nodes of the Frontera supercomputer, demonstrating that the DIAS regularization is viable even on large problems. Non-asymptotic error analysis for randomized inverse problems employs techniques from numerical analysis to show that the error of the solution to the perturbed (randomized) optimization problem is bounded with high probability. We explore the convergence of various randomized methods numerically to validate the theoretical convergence properties and to provide practical insight into the numerical performance of each method on a variety of problems. An autoencoder-based compression strategy for time-dependent inverse problems was developed as a scalable substitute for checkpointing. We study two different compression schemes: spatial compression and temporal compression. Each method is implemented and scaled on the Frontera supercomputer. Since the goal of this work is to reduce the wasted computational effort of checkpointing, we require that any proposed approach be faster than the original checkpointing implementation. This requires special care in the scalable implementation, since the underlying PDE solver (and thus restoration from checkpoints) is highly tuned and fast. We develop a novel sparse-dense autoencoder architecture to minimize the FLOPs required to perform compression and decompression, showing that excellent compression results can be obtained with high levels of sparsity in the autoencoder architecture. Lastly, we present a data generation, normalization, and training scheme, showing that even the “offline” cost of training is small relative to the cost of solving the inverse problem. This work was scaled up to 256 nodes of Frontera.
Area C (Mathematical modeling and applications): We apply our proposed methods to two model applications which are well-suited to exploring each method's relative advantages and disadvantages. First, we consider a 2D Poisson equation with sparse measurements. While applicable in a wide variety of fields, we consider Poisson's equation in the context of steady-state heat conduction. Though the forward problem is linear, the inverse problem of inferring the underlying conductivity is nonlinear. The natural ill-posedness of this problem makes it an excellent test problem for new regularization methods and machine learning. Observing the temperature at only a select few locations makes the inverse problem even more ill-posed and of practical interest for testing the capabilities of inverse solvers. We also consider a large-scale seismic inverse problem, or full waveform inversion (FWI). Seismic waves can be modeled as acoustic waves propagating through the earth.
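Before turning to the wave-speed inversion itself, the following sketch illustrates the generic workflow that the Area B compression strategy targets: instead of checkpointing full forward states for the adjoint computation, each state is compressed as it is produced and decompressed during the reverse sweep. A fixed random linear encoder/decoder and a placeholder time-stepper stand in for the trained sparse-dense autoencoder and the PDE solver; everything here is assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_latent, n_steps = 1000, 50, 200

# Stand-ins for a trained encoder/decoder: a fixed random orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((n_state, n_latent)))
encode = lambda u: Q.T @ u            # full state  -> latent code
decode = lambda z: Q @ z              # latent code -> approximate full state

def forward_step(u):
    # Placeholder for one time step of a discretized time-dependent PDE.
    return 0.99 * u + 0.01 * np.roll(u, 1)

# Forward sweep: keep compressed states instead of full checkpoints.
u = rng.standard_normal(n_state)
stored = []
for _ in range(n_steps):
    u = forward_step(u)
    stored.append(encode(u))

# Adjoint (reverse) sweep: decompress states on demand rather than recomputing
# them from checkpoints.
for z in reversed(stored):
    u_approx = decode(z)
    # ... accumulate adjoint/gradient contributions that need the forward state ...

print("floats stored:", n_latent * n_steps, "instead of", n_state * n_steps)
```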
The inverse problem we consider is to invert for the underlying acoustic wave speed given sparse measurements of the velocity field. We use this application to exhibit the scalability of our proposed methods to large-scale problems. Additionally, the time-dependence of FWI allows us to develop new methods for accelerating the solution of large-scale inverse problems.

Item Cost effective strategies for problems in computational geophysics : seismic modeling and imaging (2020-02-03)
Vamaraju, Janaki; Sen, Mrinal K.
The first part of my thesis focuses on seismic modeling in fractured media. Several recent developments in finite elements, such as the use of high-degree polynomials to approximate the wavefield, diagonalization of the mass matrices to be inverted through mass lumping techniques, and the use of high-order time-stepping schemes, have made these methods (along with their classical advantages) more attractive compared to finite difference methods (FDM). The discontinuous Galerkin finite element method (DGM) and the spectral element method (SEM) have particularly attracted researchers in the field of numerical wave propagation. SEM uses continuous basis functions, which do not allow for discontinuities in the displacement field; hence it can be used to simulate wave propagation only in non-fractured media. On the other hand, DGM allows for discontinuities in the displacement field to simulate fractures or faults, but with a significant increase in computation cost and memory requirements. Here, I formulate and analyze two new, improved finite element techniques (FEM) for the numerical solution of elastic wave propagation in fractured and non-fractured media. Enriched Galerkin (EGM) and hybrid Galerkin (HGM) formulations are proposed for elastic wave propagation that have advantages similar to those of DGM but with a computational cost comparable to that of SEM. EGM uses the same bilinear form as DGM, and discontinuous piecewise constant or bilinear functions enrich the continuous Galerkin finite element spaces. EGM satisfies local equilibrium while reducing the degrees of freedom of DGM formulations. HGM employs DGM in areas containing fractures and SEM in regions without fractures. The coupling between the domains at the interfaces is satisfied through interface conditions. The degree of reduction in computation time depends primarily on the density of fractures in the medium. I apply these methods to model wave propagation in 2D/3D fractured media and validate their efficiency with numerical examples. Fractured reservoirs are more complicated due to the presence of fractures and pores. Biot's fundamental theory of wave propagation in fluid-saturated porous media is still well accepted and forms the basis of this work. To examine the effects of fluid-filled cracks and fractures, I next propose to combine poroelasticity with linear slip theory for simulating wave propagation in fractured porous media. This study provides an equivalent anisotropic medium model for the description of porous rock with fractures in the seismic frequency band. I solve Biot's poroelastic wave equations using a velocity-stress staggered-grid finite difference algorithm. Through numerical examples, I show that fractures and pores strongly influence wave propagation and induce anisotropy and poroelastic behavior in wavefields. I also validate the presence of two compressional waves, as predicted by Biot's theory, along with the converted waves due to faults.
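Biot's poroelastic equations with linear-slip fractures are well beyond a short snippet, but the velocity-stress staggered-grid finite-difference pattern mentioned above can be shown in its simplest 1D acoustic form. All grid sizes, material values, and the source below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# 1D acoustic velocity-stress system on a staggered grid:
#   rho * dv/dt = d(sigma)/dx,     d(sigma)/dt = K * dv/dx
nx, nt = 400, 800
dx, dt = 5.0, 1e-3                       # grid spacing [m], time step [s]
rho = np.full(nx, 2000.0)                # density [kg/m^3]
K = rho * 1500.0**2                      # bulk modulus for c = 1500 m/s

v = np.zeros(nx)                         # velocities at integer nodes
sigma = np.zeros(nx)                     # stresses at staggered (half) nodes

f0, t0 = 25.0, 0.04                      # Ricker-like source frequency and delay
for it in range(nt):
    # Velocity update from the spatial derivative of stress.
    v[1:] += dt / rho[1:] * (sigma[1:] - sigma[:-1]) / dx
    # Stress update from the spatial derivative of velocity.
    sigma[:-1] += dt * K[:-1] * (v[1:] - v[:-1]) / dx
    # Inject the source into the stress field at the domain center.
    arg = (np.pi * f0 * (it * dt - t0))**2
    sigma[nx // 2] += dt * (1.0 - 2.0 * arg) * np.exp(-arg)

print("peak |v| after", nt, "steps:", np.max(np.abs(v)))
```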
Compared to elastic methods, this poroelastic approach provides a more concise and more accurate model for fractured reservoirs. The second part of my thesis centers on developing cost-effective solutions for seismic migrations and anisotropic moveout corrections. Least-squares migration (LSM) is a linearized inversion problem that iteratively minimizes a misfit functional as a function of the model perturbation. The success of the inversion largely depends on our ability to handle large systems of equations given the massive computation costs. I propose a suite of unsupervised machine learning (ML) approaches that leverage existing physics-based models and machine learning optimizers to achieve more accurate and cheaper solutions. First, I use a special kind of unsupervised recurrent neural network, the Hopfield neural network, and its variant, the Boltzmann machine, to solve the problems of Kirchhoff and post-stack reverse time migrations. Physics-based forward models can be used to derive the weights and biases of the neural network. The optimal configuration of the neural network after training corresponds to the minimum energy of the network and thus gives the reflectivity solution of the migration problem. I next implement a fast image-domain target-oriented least-squares reverse time migration (LSRTM) workflow using a conjugate Hopfield network. The method computes a low-cost target-oriented Hessian matrix using plane-wave Green's functions. I recover a more accurate image in the presence of a truncated Hessian matrix. I further implement pre-stack LSRTM in a deep learning framework and adopt various machine learning optimizers to achieve more accurate and cheaper solutions than conventional methods. Using a time-domain formulation, I show that mini-batch gradients can reduce the computation cost by using a subset of the total shots for each iteration. Mini-batches not only reduce source cross-talk but are also less memory intensive. Combining mini-batch gradients with the Adam optimizer and the Huber loss function can improve the efficiency of pre-stack LSRTM. I demonstrate high-accuracy predictions on complex synthetic models that can generate noisy data. Finally, I develop a Hough transform neural network based technique for normal moveout correction in vertically transverse isotropic (VTI) media. This technique offers advantages when compared to the time and computational costs required by a conventional anisotropic normal moveout correction. Using a Hough transform based neural network, I simultaneously fit all the non-hyperbolic reflection moveout curves using intermediate to long offsets. I apply the network to synthetic VTI datasets and demonstrate the practical feasibility of anisotropic moveout correction that is independent of travel-time picking and velocity analysis.

Item Data-scalable Hessian preconditioning for distributed parameter PDE-constrained inverse problems (2019-06-21)
Alger, Nicholas Vieau; Ghattas, Omar N.; Bui-Thanh, Tan; Biros, George; Dhillon, Inderjit; Fomel, Sergey; Oden, John T.
Hessian preconditioners are the key to efficient numerical solution of large-scale distributed parameter PDE-constrained inverse problems with highly informative data. Such inverse problems arise in many applications, yet solving them remains computationally costly. With existing methods, the computational cost depends on spectral properties of the Hessian, which worsen as more informative data are used to reconstruct the unknown parameter field.
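As a hedged illustration of this point (and not of the preconditioners developed in the dissertation), the toy below builds a Gauss-Newton-like Hessian with a decaying data-misfit spectrum plus a small regularization term, then counts conjugate-gradient iterations with and without a crude low-rank preconditioner; the matrices and the preconditioner are entirely made up.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(3)
n, alpha = 400, 1e-4
# Toy Hessian: data-misfit term with geometrically decaying spectrum + alpha*I.
J = rng.standard_normal((n, n)) * (0.7 ** np.arange(n))     # decaying columns
H = J.T @ J + alpha * np.eye(n)
b = rng.standard_normal(n)

def cg_iterations(Amat, rhs, M=None):
    count = [0]
    cg(Amat, rhs, M=M, callback=lambda xk: count.__setitem__(0, count[0] + 1))
    return count[0]

# Crude preconditioner: keep the k dominant eigenpairs of H exactly and replace
# the rest by the regularization level alpha (a stand-in only).
k = 40
w, V = np.linalg.eigh(H)                    # eigenvalues in ascending order
Vk, wk = V[:, -k:], w[-k:]
H_approx = (Vk * wk) @ Vk.T + alpha * (np.eye(n) - Vk @ Vk.T)
M = LinearOperator((n, n), matvec=lambda x: np.linalg.solve(H_approx, x))

print("CG iterations, unpreconditioned:", cg_iterations(H, b))
print("CG iterations, preconditioned:  ", cg_iterations(H, b, M))
```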
The best case scenario from a scientific standpoint (lots of high-quality data) is therefore the worst case scenario from a computational standpoint (large computational cost). In this dissertation, we argue that the best way to overcome this predicament is to build data-scalable Hessian/KKT preconditioners, that is, preconditioners that perform well even if the data are highly informative about the parameter. We present a novel data-scalable KKT preconditioner for a diffusion inverse problem, a novel data-scalable Hessian preconditioner for an advection inverse problem, and a novel data-scalable domain decomposition preconditioner for an auxiliary operator that arises in connection with KKT preconditioning for a wave inverse problem. Our novel preconditioners outperform existing preconditioners in all three cases: they are robust to large numbers of observations in the diffusion inverse problem, large Peclet numbers in the advection inverse problem, and high wave frequencies in the wave inverse problem.

Item Feature modeling and tomographic reconstruction of electron microscopy images (2012-05)
Gopinath, Ajay, 1980-; Bovik, Alan C. (Alan Conrad), 1958-; Ress, David Bruce; Xu, Guoliang; Pearce, John; Ghosh, Joydeep
This work introduces a combination of image processing and analysis methods that perform feature extraction, shape analysis and tomographic reconstruction of electron microscopy images. These have been implemented on images of the AIDS virus interacting with neutralizing molecules. The AIDS virus spike is the primary target of drug design as it is directly involved in infecting host cells. First, a fully automated technique is introduced that can extract sub-volumes of the AIDS virus spike and be used to build a statistical model without the need for any user supervision. Such an automatic feature extraction method can significantly enhance the overall process of shape analysis of the AIDS virus spike imaged through the electron microscope. Accurate models of the virus spike will help in the development of better drug design strategies. Secondly, a tomographic reconstruction method implemented using a shape-based regularization technique is introduced. Spatial models of known features in the structure being reconstructed are integrated into the reconstruction process as regularizers. This regularization scheme is driven locally through shape information obtained from segmentation and compared with a known spatial model. This method shows reduced blurring, and an improvement in the resolution of the reconstructed volume was also measured. It performs better than popular current techniques and can be extended to other tomographic modalities. Improved electron tomography reconstructions will provide better structure elucidation and improved feature visualization, which can aid in solving key biological issues.

Item Hessian-based response surface approximations for uncertainty quantification in large-scale statistical inverse problems, with applications to groundwater flow (2013-08)
Flath, Hannah Pearl; Ghattas, Omar N.
Subsurface flow phenomena characterize many important societal issues in energy and the environment. A key feature of these problems is that subsurface properties are uncertain, due to the sparsity of direct observations of the subsurface. The Bayesian formulation of this inverse problem provides a systematic framework for inferring uncertainty in the properties given uncertainties in the data, the forward model, and prior knowledge of the properties.
We address the following problem: given noisy measurements of the head, the probability density function (pdf) describing the noise, prior information in the form of a pdf of the hydraulic conductivity, and a groundwater flow model relating the head to the hydraulic conductivity, find the posterior pdf of the parameters describing the hydraulic conductivity field. Unfortunately, conventional sampling of this pdf to compute statistical moments is intractable for problems governed by large-scale forward models and high-dimensional parameter spaces. We construct a Gaussian process surrogate of the posterior pdf based on Bayesian interpolation between a set of "training" points. We employ a greedy algorithm to find the training points by solving a sequence of optimization problems in which each new training point is placed at the maximizer of the error in the approximation. Scalable Newton optimization methods solve this "optimal" training point problem. We tailor the Gaussian process surrogate to the curvature of the underlying posterior pdf according to the Hessian of the log posterior at a subset of training points, made computationally tractable by a low-rank approximation of the data misfit Hessian. A Gaussian mixture approximation of the posterior is extracted from the Gaussian process surrogate and used as a proposal in a Markov chain Monte Carlo method for sampling both the surrogate and the true posterior. The Gaussian process surrogate is used as a first-stage approximation in a two-stage delayed acceptance MCMC method. We provide evidence for the viability of the low-rank approximation of the Hessian through numerical experiments on a large-scale atmospheric contaminant transport problem and analysis of an infinite-dimensional model problem. We provide similar results for our groundwater problem. We then present results from the proposed MCMC algorithms.

Item High-performance inversion algorithms for brain tumor growth models in personalized medicine (2021-06-28)
Subramanian, Shashank; Biros, George; Ward, Rachel; Ghattas, Omar; Yankeelov, Thomas; Davatzikos, Christos
This dissertation concerns the integration of biophysical macroscopic brain tumor growth models with clinical imaging data from Magnetic Resonance Imaging (MRI) scans. We focus on gliomas (and their aggressive manifestation, glioblastoma multiforme (GBM)), the most common malignant primary brain tumor diagnosed in adults. GBM is a deadly disease characterized by its highly invasive nature into surrounding healthy tissue and is uniformly fatal, with a median survival of less than 15 months. The integration of mathematical models with clinical imaging data holds the enormous promise of robust, minimal, and explainable models that quantify cancer growth and connect cell-scale phenomena to organ-scale, personalized, clinical observables. These models can help facilitate diagnosis (e.g., tumor grading and patient stratification), prognosis (e.g., predicting recurrence and survival), and treatment (e.g., preoperative planning and radiotherapy). Additionally, they can advance our understanding of the disease by using imaging data to test model-driven hypotheses on disease progression and treatment. Towards this end, we develop mathematical models that capture the heterogeneous phenomenological features of GBMs as observed from patient imaging scans and provide a framework to calibrate these models for unknown patient-specific biomarkers.
There are three key challenges to developing and integrating biophysical brain tumor growth models with imaging data: (i) tumor growth is a complex dynamical system with several interacting biophysical processes that are challenging to capture mathematically, (ii) the inverse problem of calibrating these growth models is notoriously difficult due to the lack of temporal resolution in imaging data, leading to severe mathematical ill-posedness, and (iii) there is a prohibitive computational cost associated with the 4D (space-time) simulation and calibration of tumor growth models. We introduce novel innovations to systematically address these challenges: (i) we develop minimal phenomenological models that integrate the complex heterogeneous structure of GBM with its infiltrative and biomechanical effects on brain tissue, (ii) we introduce and analyze a new inverse problem formulation with biophysically inspired regularization methods and ensembled fast inversion algorithms to reliably calibrate our mathematical models using imaging data, and (iii) we integrate our numerical methods and algorithms within a high-performance software library that exploits heterogeneous compute substrates (distributed memory and GPU acceleration) to enable realistic solution times. Our framework provides an entirely new capability to analyze complex tumors (possibly multifocal) from a single-time-snapshot MRI scan in a fully automatic manner. Finally, we conduct a comprehensive retrospective study using a large number of clinical images to demonstrate the utility of our calibrated tumor growth models in important clinical tasks such as medical image segmentation, patient stratification, and overall survival prediction. We envision this research to be an important stepping stone towards the precise characterization of cancer and the personalization of cancer growth models for clinical decision-making support.

Item Hybrid inverse problems in molecular imaging (2017-05)
Zhang, Rongting; Ren, Kui; Tsai, Yen-Hsi; Gonzalez, Oscar; Ghattas, Omar; Biros, George
Hybrid inverse problems refer to inverse problems in which two partial differential equations of different types are coupled. Such problems appear in modern hybrid imaging modalities, where we attempt to combine two different imaging modalities to achieve imaging abilities that could not be achieved by either modality alone. This dissertation is devoted to the study of hybrid inverse problems in two molecular imaging modalities that are based on photoacoustics: the coupling of ultrasound imaging with optical tomography through the photoacoustic effect to achieve high-resolution and high-contrast imaging of molecular functions of biological tissues. The first inverse problem we study is related to quantitative two-photon photoacoustic tomography (TP-PAT). The mathematical problem is to reconstruct coefficients in a semilinear diffusion equation from interior information on the solution of the PDE. We derive some uniqueness, non-uniqueness, and stability results on the reconstruction problem under various circumstances. Moreover, we propose a few image reconstruction algorithms and perform numerical simulations using these algorithms to complement our theoretical analysis. The second inverse problem we study arises in quantitative fluorescence photoacoustic tomography (fPAT). The objective is to reconstruct optical coefficients in a system of radiative transport equations from interior data on the solution to the system.
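As a caricature of recovering a coefficient from interior data of the kind that appears in these photoacoustic problems (and not of the algorithms developed in the dissertation), the sketch below applies the classical fixed-point iteration for a single absorption coefficient in a 1D diffusion model, where the interior datum is H = sigma*u and the Grüneisen coefficient is set to one; the geometry, coefficients, and iteration count are all assumptions.

```python
import numpy as np

# Toy 1D model:  -D u'' + sigma(x) u = 0 on (0,1),  u(0) = u(1) = 1 (illumination).
# Interior (photoacoustic-type) datum:  H(x) = sigma(x) * u(x).
n, D = 200, 0.05
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]

def solve_diffusion(sigma):
    # Finite-difference solution for u at the interior nodes.
    A = (np.diag(2 * D / h**2 + sigma)
         + np.diag(-D / h**2 * np.ones(n - 1), 1)
         + np.diag(-D / h**2 * np.ones(n - 1), -1))
    rhs = np.zeros(n)
    rhs[0] += D / h**2        # boundary value u(0) = 1
    rhs[-1] += D / h**2       # boundary value u(1) = 1
    return np.linalg.solve(A, rhs)

sigma_true = 1.0 + 0.5 * np.exp(-((x - 0.5) / 0.1)**2)     # absorption "inclusion"
H = sigma_true * solve_diffusion(sigma_true)               # synthetic interior data

# Classical fixed-point iteration:  sigma_{k+1} = H / u(sigma_k).
sigma = np.ones(n)
for _ in range(25):
    sigma = H / solve_diffusion(sigma)

print("relative error:",
      np.linalg.norm(sigma - sigma_true) / np.linalg.norm(sigma_true))
```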
We study the question of uniqueness and stability of reconstructions and develop some direct and iterative image reconstruction methods for the reconstruction of the quantum efficiency and the fluorescent absorption coefficient. We also perform numerical studies on the inverse problems for media with different absorption and scattering properties.

Item Inverse modeling and characterization of an experimental testbed to advance fire scene reconstruction (2017-12)
Kurzawski, Andrew Joseph; Ezekoye, Ofodike A.; Hall, Matthew J.; Moser, Robert D.; Bui-Thanh, Tan; Fuss, Stephen P.
Fire investigators examine fire scenes and collect data to form hypotheses on the origin and cause of the fire. The fire scene contains a wealth of data in the form of damage to objects in the areas affected by the fire. A computational framework with the ability to make inferences on the origin of a fire based on these data would be beneficial to the fire investigation process. Such a framework would require models of the fires, quantifiable damage metrics, and a method for making inferences on the fire origin. This work seeks to address two of the three points by using Bayesian inversion to determine the most likely origin of a fire in a compartment and by constructing an algorithm for determining the heat-release rate of a burning object that can be supplied to a computational fire model. To accomplish these tasks, an experimental burn compartment was designed and a series of tests was run with controlled heat-release rates. Data collected in each experiment included temperatures, heat fluxes, and gas velocities. Modeling of the controlled heat-release rate experiments was carried out in the Consolidated Model of Fire and Smoke Transport (CFAST) and the Fire Dynamics Simulator (FDS). Both the Bayesian inversion framework and the heat-release rate reconstruction algorithm rely on computational fire models to determine the fire location and heat-release rate, respectively. Following the modeling efforts, the Bayesian inversion framework was tested on synthetic data generated by FDS using the geometry of the experimental structure. Time-integrated total energy per unit area data were used as a placeholder for damage models of objects found in a fire scene. The heat-release rate reconstruction algorithm was used to determine the heat-release rates of the experiments using transient heat flux data collected at an array of sensors.

Item Multi-material nanoindentation simulations of viral capsids (2010-05)
Subramanian, Bharadwaj; Bajaj, Chandrajit; Oden, Tinsley J.
An understanding of the mechanical properties of viral capsids (protein assemblies forming shell containers) has become necessary given their perceived use as nano-materials for targeted drug delivery. In this thesis, a heterogeneous, spatially detailed model of the viral capsid is considered. This model takes into account the increased degrees of freedom between the capsomers (capsid sub-structures) and the interactions between them to better reflect their deformation properties. A spatially realistic finite element multi-domain decomposition of viral capsid shells is also generated from atomistic PDB (Protein Data Bank) information, and non-linear continuum elastic simulations are performed. These results are compared to homogeneous shell simulation results to bring out the importance of non-homogeneous material properties in determining the deformation of the capsid.
Finally, multiscale methods in structural analysis are reviewed to assess their potential application to the study of nanoindentation of viral capsids.

Item Noninvasive material discrimination using spectral radiography and an inverse problem approach (2014-12)
Gilbert, Andrew James; Deinert, Mark; McDonald, Benjamin; Biegalski, Steven; Ghattas, Omar; Schneider, Erich
Noninvasive material discrimination of an arbitrary object is applicable to a wide range of fields, including medical scans, security inspections, nuclear safeguards, and nuclear material accountancy. In this work, we present an algorithmic framework to accurately determine material compositions from multi-spectral X-ray and neutron radiography. The algorithm uses an inverse problem approach and regularization, which amounts to adding information to the problem, stabilizing the solution so that accurate material estimations can be made from a problem that would otherwise be intractable. First, we show the utility of the algorithm with simulated inspections of small objects, such as baggage, for small quantities of high-atomic-numbered materials (i.e., plutonium). The algorithm shows excellent sensitivity to shielded plutonium in a scan using an X-ray detector that can bin X-rays by energy. We present a method to adaptively weight the regularization term, obtaining an optimal solution with minimal user input. Second, we explore material discrimination with high-energy, multiple-energy X-rays. Experimental X-ray data are obtained, and accurate discrimination of steel among lower-atomic-numbered materials is shown. Accurate modeling of the inspection system physics, especially the detector response and the scattered flux on the image plane, is found to be essential for accurate material estimations with these data. Third, we explore the use of neutron radiography as a complement to X-ray radiography for the inspection of nuclear material storage containers. The utility of this extra data is shown, especially in detecting a hypothetical attempt to divert material. We present a method to choose inspection system design parameters (i.e., source energy and detector thickness) a priori by using the Cramér-Rao lower bound as a measure of the resulting material estimation accuracy. Finally, we present a methodology to use tomography data obtained with an energy-discriminating detector for direct reconstruction of material attenuation coefficients.

Item Numerical algorithms for inverse problems in acoustics and optics (2014-05)
Ding, Tian, 1986-; Ren, Kui; Engquist, Bjorn; Gamba, Irene Martínez; Ghattas, Omar; Gonzalez, Oscar; Wheeler, Mary Fanett
The objective of this dissertation is to develop computational algorithms for solving inverse coefficient problems for partial differential equations that appear in two medical imaging modalities. The aim of these inverse problems is to reconstruct optical properties of scattering media, such as biological tissues, from measured data collected on the surface of the media. In the first part of the dissertation, we study an inverse boundary value problem for the radiative transport equation. This inverse problem plays an important role in optics-based medical imaging techniques such as diffuse optical tomography and fluorescence optical tomography. We propose a robust reconstruction method that is based on subspace minimization techniques.
The method splits the unknowns, both the unknown coefficient and the corresponding transport solutions (or a functional of them), into low-frequency and high-frequency components, and uses the singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm. In the second part of the dissertation, we develop a three-dimensional reconstruction algorithm for photoacoustic tomography in isotropic elastic media. There has been extensive study of photoacoustic tomography in recent years. However, all existing numerical reconstructions are developed for acoustic media, in which case the model for wave propagation is the acoustic wave equation. We develop here a two-step reconstruction algorithm to quantitatively reconstruct optical properties, mainly the absorption coefficient and the Grüneisen coefficient, using measured elastic wave data. The algorithm consists of an inverse source step, where we reconstruct the source function in the elastic wave equation from boundary data, and an inverse coefficient step, where we reconstruct the coefficients of the diffusion equation using the result of the previous step as interior data. We present some numerical reconstruction results with synthetic data to demonstrate the performance of our algorithm. This is, to the best of our knowledge, the first reconstruction algorithm developed for quantitative photoacoustic imaging in elastic media. Although we separate the dissertation into these two parts to make each part more focused, the algorithms developed in the two parts are closely related. In fact, if we replace the diffusion model for light propagation in photoacoustic imaging by the radiative transport model, which is often done in the literature, the algorithm developed in the first part can be integrated into the algorithm in the second part after some minor modifications.

Item Numerical methods for multiscale inverse problems (2014-05)
Frederick, Christina A.; Engquist, Björn, 1945-
This dissertation focuses on inverse problems for partial differential equations with multiscale coefficients, in which the goal is to determine the coefficients in the equation using solution data. Such problems pose a huge computational challenge, in particular when the coefficients are of multiscale form. When faced with balancing computational cost against accuracy, most approaches only deal with models of large-scale behavior and, for example, account for microscopic processes by using effective or empirical equations of state on the continuum scale to simplify computations. Obtaining these models often results in the loss of the desired fine-scale details. In this thesis we introduce ways to overcome this issue using a multiscale approach. The first part of the thesis establishes the close relation between computational grids in multiscale modeling and sampling strategies developed in information theory. The theory developed is based on the mathematical analysis of multiscale functions of the type studied in averaging and homogenization theory and in multiscale modeling. Typical examples are two-scale functions f(x, x/ε), 0 < ε ≪ 1, that are periodic in the second variable.
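A small numerical illustration of such two-scale coefficients, under assumptions of my own (a coefficient that oscillates only in the fast variable, and a 1D elliptic model problem): the classical homogenization result that the oscillatory coefficient is effectively replaced by its harmonic mean. The sampling theory developed in the dissertation is not reproduced here.

```python
import numpy as np

eps = 0.02
a_fast = lambda y: 2.0 + np.cos(2 * np.pi * y)      # 1-periodic in the fast variable
a_eps = lambda x: a_fast(x / eps)                   # oscillatory coefficient a(x/eps)

def solve_bvp(coef, n=1000):
    # -d/dx( coef(x) du/dx ) = 1 on (0,1) with u(0) = u(1) = 0, finite differences.
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    k = coef(0.5 * (x[:-1] + x[1:]))                # coefficient at cell midpoints
    A = (np.diag((k[:-1] + k[1:]) / h**2)
         + np.diag(-k[1:-1] / h**2, 1)
         + np.diag(-k[1:-1] / h**2, -1))
    return np.linalg.solve(A, np.ones(n - 1))

u_eps = solve_bvp(a_eps)
# Classical 1D homogenization: the effective coefficient is the harmonic mean of a.
y = np.linspace(0.0, 1.0, 10000, endpoint=False)
a_hom = 1.0 / np.mean(1.0 / a_fast(y))
u_hom = solve_bvp(lambda x: np.full_like(x, a_hom))

print("effective coefficient:", a_hom, " max difference:", np.max(np.abs(u_eps - u_hom)))
```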
We prove that under certain band-limiting conditions these multiscale functions can be uniquely and stably recovered from nonuniform samples of optimal rate. In the second part, we present a new multiscale approach for inverse homogenization problems. We prove that in certain cases where the specific form of the multiscale coefficients is known a priori, imposing an additional constraint of a microscale parametrization results in a well-posed inverse problem. The mathematical analysis is based on homogenization theory for partial differential equations and the classical theory of inverse problems. The numerical analysis involves the design of multiscale methods, such as the heterogeneous multiscale method (HMM). The use of HMM solvers for the forward model has unveiled theoretical and numerical results for microscale parameter recovery, including applications to inverse problems arising in exploration seismology and medical imaging.

Item Quantitative PAT with unknown ultrasound speed : uncertainty characterization and reconstruction methods (2015-05)
Vallélian, Sarah Catherine; Ren, Kui; Ghattas, Omar; Müller, Peter; Tsai, Yen-Hsi; Ward, Rachel
Quantitative photoacoustic tomography (QPAT) is a hybrid medical imaging modality that combines high-resolution ultrasound tomography with high-contrast optical tomography. The objective of QPAT is to recover certain optical properties of heterogeneous media from measured ultrasound signals, generated by the photoacoustic effect, on the surfaces of the media. Mathematically, QPAT is an inverse problem in which we intend to reconstruct physical parameters in a set of partial differential equations from partial knowledge of the solution of the equations. A rather complete mathematical theory for the QPAT inverse problem has been developed in the literature for the case where the speed of ultrasound inside the underlying medium is known. In practice, however, the ultrasound speed is usually not exactly known for the medium to be imaged. Using an approximate ultrasound speed in the reconstructions often yields images that contain severe artifacts. There has been little systematic study of this issue of unknown ultrasound speed in QPAT reconstructions. The objective of this dissertation is precisely to investigate this important issue of QPAT with unknown ultrasound speed. The first part of this dissertation addresses the question of how an incorrect ultrasound speed affects the quality of the reconstructed images in QPAT. We prove stability estimates in certain settings which bound the error in the reconstructions by the uncertainty in the ultrasound speed. We also study the problem numerically by adopting a statistical framework and applying tools from uncertainty quantification to systematically characterize artifacts arising from the parameter mismatch. In the second part of this dissertation, we propose an alternative reconstruction algorithm for QPAT which does not assume knowledge of the ultrasound speed map a priori, but rather reconstructs it alongside the original optical parameters of interest using data from multiple illumination sources.
We explain the advantage of this simultaneous reconstruction approach compared to the usual two-step approach to QPAT and demonstrate numerically the feasibility of our algorithm.

Item Recovering the payoff structure of a utility maximizing agent (2016-05)
Goswami, Pulak; Žitković, Gordan; Sirbu, Mihai; Pavlovic, Natasa; Larsen, Kasper
Any agent with access to information that is not available to the market at large is considered an 'insider'. It is possible to interpret the effect of this private information as a change in the insider's probability measure. In the case of exponential utility, the logarithm of the Radon-Nikodym derivative for the change of measure will appear as a random endowment in the objective the insider would maximize with respect to the original measure. The goal of this paper is to find conditions under which it is possible to recover the structure of this random endowment given only a single trajectory of the insider's wealth. To do this, it is assumed that the random endowment is a function of the terminal value of the state variable and that the market is complete.

Item Total variation regularized solution of the satellite gravity inverse problem (2024-05)
Jacob, Geethu Lisba; Bettadpur, Srinivas Viswanath, 1963-; Jingyi Ann Chen; Brandon Jones; Omar Ghattas; Ryan Russell
The Gravity Recovery and Climate Experiment (GRACE) and its successor GRACE Follow-On (GRACE-FO) are satellite-satellite tracking missions which enabled precise measurement of time-variable gravity and mass change on the Earth and have been critical in improving our understanding of Earth system processes in hydrology, oceanography, and the cryosphere. The estimation of time-variable gravity from spaceborne measurements, such as those of GRACE(-FO), constitutes an ill-posed inverse problem, which results in large errors in the unconstrained solutions. The common approach to stabilizing the inversion has been L₂-Tikhonov regularization using heuristic, spatially varying constraint matrices. The design of the constraint matrix requires substantial expertise in interpretation of the signal content in geophysical model outputs or the GRACE(-FO) data. In this dissertation, Total Variation (TV) regularization is demonstrated to be an effective alternative penalty for the GRACE(-FO) gravity estimation problem that does not require the use of an expertly designed constraint matrix. The TV penalty preferentially suppresses the poorly observed high-frequency modes in the solution and promotes edge preservation and signal localization. This facilitates the recovery of the mass change signal with limited leakage error, without the need for explicit spatial constraints. Three approaches to regularized estimation of mass change are explored: post-processing of unconstrained estimates, inversion using a spherical harmonic intermediary, and inversion from a direct mapping between mass anomalies and observations. The post-processing framework demonstrates the efficacy of TV regularization in suppressing high-frequency noise and recovering the spatial variability in the mass change signal with limited leakage. TV-regularized estimates of mass anomalies from range-rate measurements are derived using a spherical harmonic intermediary. Iterative refinement is performed to reduce signal amplitude loss due to regularization bias, particularly in regions with large trends.
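The GRACE(-FO) estimation chain is far more involved than any snippet, but the edge-preserving behavior of a TV penalty relative to an L₂ penalty can be seen on a toy 1D denoising problem; the smoothed TV term, weights, and optimizer below are all illustrative choices of my own.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 200
m_true = np.where(np.arange(n) < n // 2, 0.0, 1.0)      # piecewise-constant "signal"
d = m_true + 0.1 * rng.standard_normal(n)               # noisy observations

def objective(m, penalty, alpha, beta=1e-3):
    misfit = 0.5 * np.sum((m - d)**2)
    dm = np.diff(m)
    if penalty == "tv":
        reg = np.sum(np.sqrt(dm**2 + beta**2))          # smoothed total variation
    else:
        reg = 0.5 * np.sum(dm**2)                       # L2 (Tikhonov) on the gradient
    return misfit + alpha * reg

m_tv = minimize(objective, d, args=("tv", 0.5), method="L-BFGS-B").x
m_l2 = minimize(objective, d, args=("l2", 0.5), method="L-BFGS-B").x

# The TV solution keeps the jump sharp, while the L2 penalty smears it out.
print("largest jump, TV:", np.max(np.abs(np.diff(m_tv))))
print("largest jump, L2:", np.max(np.abs(np.diff(m_l2))))
```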
The time series of mass change estimates generated using TV regularization are consistent with the L₂-Tikhonov mascons presently used by the GRACE(-FO) community in terms of spectral characteristics, long-wavelength modes, spatial distribution of the annual and trend signals, and the temporal evolution of aggregate total mass over basins of interest. TV-regularized estimation using a direct mapping between the mass anomalies and range acceleration is also discussed, including results from a simulation performed to assess the behavior of the regularization scheme without being constrained by the limitations of flight data quality. Due to the absence of explicit spatial constraints, the TV regularization scheme exposes the inherent spatial variability in the data, without any preconceptions. While the validation presented here is based on agreement with the L₂-Tikhonov mascons, regional differences between the two solutions could be informative and offer a new pathway for investigation of mass change within the GRACE(-FO) community.

Item Towards the predictive modeling of ductile failure (2015-12)
Gross, Andrew Jeffrey; Ravi-Chandar, K.; Kovar, Desiderio; Landis, Chad; Liechti, Kenneth; Kyriakides, Stelios
The ability to predict ductile failure is addressed through an experimental examination of the failure process, validation exercises to assess predictive ability, and the development of a coupled experimental-numerical strategy to enhance model development. In situ loading of a polycrystalline metal, Al 6061-T6, inside a scanning electron microscope reveals a matrix-dominated response for both deformation and failure. Highly localized deformation fields are found to exist within each grain as slip accumulates preferentially on a small fraction of crystallographic planes. No evidence of damage or material softening is found, implying that a strain-to-failure model is adequate for modeling fracture in this and similar materials. This modeling insight is validated through blind predictive simulations performed in response to the 2012 and 2014 Sandia Fracture Challenges. Constitutive and failure models are calibrated and then embedded in highly refined finite element simulations to perform blind predictions of the failure behavior of the challenge geometries. Comparison of prediction to experiment shows that a well-calibrated model that captures the essential elastic-plastic constitutive behavior is necessary to confidently capture the response of structures with complex stress states, and is a prerequisite for a precise prediction of material failure. The validation exercises exposed the need to calibrate sophisticated plasticity models without a large experimental effort. To answer this need, a coupled experimental and numerical method is developed for characterizing the elastic-plastic constitutive properties of ductile materials, using local deformation field information to enrich the calibration data. The method is applied to a tensile test specimen, and the material's constitutive model, whose parameters are unknown a priori, is determined through an optimization process that compares these experimental measurements with iterative finite element simulations. The final parameters produce a simulation that tracks the local experimental displacement field to within a couple of percent error. Simultaneously, the error in the simulated load carried by the specimen throughout the test is less than one percent.
The enriched calibration data are found to be sufficient to constrain model parameters describing anisotropy that could not be constrained by the global data alone.
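A minimal sketch of the coupled experimental-numerical calibration loop described in this abstract, with a hypothetical power-law hardening model and synthetic data standing in for the finite element simulations and the measured displacement fields; only the structure of the fit is meant to carry over.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical hardening law: stress = K * strain**n  (parameters K, n unknown).
def model_stress(params, strain):
    K, n = params
    return K * strain**n

rng = np.random.default_rng(5)
strain = np.linspace(0.01, 0.2, 50)
true_params = np.array([500.0, 0.2])                     # "unknown" ground truth
measured = model_stress(true_params, strain) * (1 + 0.02 * rng.standard_normal(50))

# Calibration: minimize the misfit between predicted and measured response.
# In the dissertation the prediction comes from iterative finite element runs and
# the data from local displacement fields; here a closed-form model stands in.
residual = lambda p: model_stress(p, strain) - measured
fit = least_squares(residual, x0=np.array([300.0, 0.1]))
print("calibrated parameters:", fit.x)
```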