TACCSTER 2020 Proceedings

  • Item
    A High-Performance Inversion Framework for Brain Tumor Growth Models in Personalized Medicine
    (2020) Subramanian, Shashank; Scheufele, Klaudius; Himthani, Naveen; Biros, George
    The precise characterization of aggressive brain tumors remains a challenging problem due to their highly heterogeneous radiographic and molecular presentation. The integration of mathematical models with clinical imaging data holds enormous promise for developing robust, predictive, and explainable models that quantify cancer growth, with the potential to assist in diagnosis and treatment. In general, such models are parameterized by many unknown parameters, and their estimation can be formally posed as an inverse problem. However, this calibration problem is a formidable task for aggressive brain tumors due to the absence of longitudinal data, resulting in a strongly ill-posed inverse problem. This is further exacerbated by the inherent non-linearity in tumor growth models. Overcoming these difficulties requires sophisticated regularization strategies along with computationally efficient algorithms and software. Towards this end, we introduce a fully-automatic inversion framework which provides an entirely new capability to analyze complex brain tumors from a single pretreatment magnetic resonance imaging (MRI) scan. Our framework employs fast algorithms and optimized implementations which exploit distributed-memory parallelism and GPU acceleration to enable reasonable solution times, an important factor for clinical applications. We validate our solver on clinical data and demonstrate its utility in characterizing important biophysics of brain cancer along with its ability to complement other radiographic information in downstream machine learning tasks.
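To illustrate why regularization stabilizes an ill-posed calibration like the one described above, here is a minimal toy sketch: a linear, underdetermined inverse problem solved with Tikhonov (L2) regularization. This is not the authors' solver (which handles a nonlinear, PDE-constrained problem at scale); every name and parameter value below is hypothetical.

```python
import numpy as np

# Toy ill-posed inverse problem: fewer observations than parameters,
# so the unregularized normal equations are singular.
rng = np.random.default_rng(0)
n_obs, n_param = 20, 50
G = rng.standard_normal((n_obs, n_param))   # hypothetical forward operator
p_true = np.zeros(n_param)
p_true[:5] = 1.0                            # sparse "true" parameters
d = G @ p_true + 0.01 * rng.standard_normal(n_obs)  # noisy data

# Minimize ||G p - d||^2 + beta ||p||^2 via the regularized normal equations.
beta = 1e-2                                 # regularization weight (hypothetical)
p_hat = np.linalg.solve(G.T @ G + beta * np.eye(n_param), G.T @ d)

misfit = np.linalg.norm(G @ p_hat - d)      # small residual despite ill-posedness
```

The regularization term makes the system matrix positive definite, trading a small bias for a unique, stable solution, the same principle (in far simpler form) that sophisticated regularization strategies serve in the tumor-growth setting.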
  • Item
    Negative Grüneisen Parameters at High Pressure in FeTi from Density Functional Theory
    (2020) Munoz, Jorge; Khamala, Bethuel
    Density functional theory (DFT) calculations are a pillar of modern materials physics and quantum chemistry research, and the execution of DFT codes represents a significant fraction of the overall utilization of Texas Advanced Computing Center (TACC) resources. We present results of DFT and DFT-based calculations on FeTi, a brittle intermetallic material that crystallizes in the bcc-based CsCl structure and is stable until it melts at 1600 K. We investigated its electronic band structure and phonon dispersion relations using DFT at different specific volumes and uncovered a volume range in which the majority of the phonon modes decrease in energy or remain unchanged with decreasing volume. This behavior is usually observed in invar materials, but unlike them, FeTi is nonmagnetic and there is negligible change in the Fermi surface with pressure. The behavior occurs more generally in materials that show negative thermal expansion, but unlike most of those materials, the crystal structure of FeTi is not particularly open and it is stable at high pressure. In this talk we will show ancillary measurements of the phonon density-of-states curves performed via nuclear-resonant inelastic x-ray scattering in a diamond-anvil cell (DAC) at pressures up to 55 GPa, and x-ray diffraction, also in a DAC, at pressures up to 25 GPa, that confirm that the Grüneisen parameters are indeed negative in the predicted specific volumes. We also show an analysis of the calculated force constants, charge densities, and band structures that preliminarily points towards orbital hybridization as the origin of the observed negative Grüneisen parameters.
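As a reminder of the standard definition (general background, not taken from the talk itself), the mode Grüneisen parameter measures how a phonon frequency shifts with volume:

```latex
\gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V}
```

In ordinary materials, phonon frequencies stiffen under compression (\(\omega_i\) grows as \(V\) shrinks), giving \(\gamma_i > 0\); modes whose energies instead decrease with decreasing volume, as reported for FeTi above, have \(\gamma_i < 0\).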
  • Item
    Stochastic SIR-based Examination of the Policy Effects on the COVID-19 Spread in the U.S. States
    (2020) Song, Mina; Belle, Macy; Mendlovitz, Aaron; Han, David
    Since the global outbreak of the novel COVID-19, many research groups have studied the epidemiology of the virus to produce short-term forecasts and to formulate effective disease containment and mitigation strategies. The major challenge lies in the proper assessment of epidemiological parameters over time, and of how they are modulated by the effect of any publicly announced interventions. Here we attempt to examine and quantify the effects of the various (legal) policies/orders in place to mandate social distancing and to flatten the curve in each of the U.S. states. Through Bayesian inference on stochastic SIR models of the virus spread, the effectiveness of each policy in reducing the magnitude of the growth rate of new infections is investigated statistically. This will inform the public and policymakers, and help them understand the most effective actions to fight against the current and future pandemics. It will also aid policymakers in responding more rapidly (selecting, tightening, and/or loosening appropriate measures) to stop or mitigate a pandemic early on.
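A minimal forward simulation of the kind of model being fit is sketched below: a discrete-time stochastic SIR chain in which a distancing policy reduces the contact rate after an intervention day. The study performs Bayesian inference over such models; this sketch only simulates one, and all parameter values (population, rates, policy day and effect) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N, I0 = 100_000, 10                     # hypothetical population and seed cases
beta0, gamma = 0.30, 0.10               # daily contact and recovery rates
policy_day, policy_effect = 40, 0.5     # distancing halves beta after day 40

S, I, R = N - I0, I0, 0
history = []
for day in range(160):
    beta = beta0 * (policy_effect if day >= policy_day else 1.0)
    # Binomial transition kernels (Markov-chain approximation of SIR)
    p_inf = 1.0 - np.exp(-beta * I / N)  # per-susceptible infection probability
    p_rec = 1.0 - np.exp(-gamma)         # per-infective recovery probability
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    history.append((S, I, R))
```

Inference would treat `beta0`, `gamma`, and `policy_effect` as unknowns with priors and condition on observed case counts; the change in the growth rate of `new_inf` around `policy_day` is exactly the quantity whose magnitude the abstract proposes to estimate per policy.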
  • Item
    Optimal Dynamic Treatment Regime by Reinforcement Learning in Clinical Medicine
    (2020) Song, Mina; Han, David
    Precision medicine allows a personalized treatment regime for patients with distinct clinical histories and characteristics. Dynamic treatment regimes implement a reinforcement learning algorithm to produce the optimal personalized treatment regime in clinical medicine. Reinforcement learning methods are applicable when an agent takes actions in response to a changing environment over time. Q-learning is one of the popular methods for developing the optimal dynamic treatment regime by fitting linear outcome models in a recursive fashion. Despite its ease of implementation and interpretation for domain experts, Q-learning has a certain limitation due to the risk of misspecification of the linear outcome model. Recently, algorithms more robust to model misspecification have been developed. For example, the inverse probability weighted estimator overcomes this problem by using a nonparametric model with different weights assigned to the observed outcomes for estimating the mean outcome. On the other hand, the augmented inverse probability weighted estimator combines information from both the propensity model and the mean outcome model. The current statistical methods for producing the optimal dynamic treatment regime, however, allow only a binary action space. In clinical practice, combinations of treatments are often required, giving rise to a multi-dimensional action space. This study develops and demonstrates a practical way to accommodate a multi-level action space, utilizing currently available computational methods for the practice of precision medicine.
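The recursive fitting that the abstract attributes to Q-learning can be sketched on synthetic data: fit a linear outcome model at the last stage, plug the optimized value back as a pseudo-outcome, and fit the earlier stage. This toy uses two stages with binary actions and OLS; the data-generating model and every variable name are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x1 = rng.standard_normal(n)                  # baseline covariate
a1 = rng.integers(0, 2, n)                   # stage-1 treatment (binary)
x2 = 0.5 * x1 + 0.3 * a1 + 0.1 * rng.standard_normal(n)
a2 = rng.integers(0, 2, n)                   # stage-2 treatment (binary)
# Hypothetical outcome: treating at stage 2 helps when x2 < 1
y = x2 + a2 * (1.0 - x2) + 0.1 * rng.standard_normal(n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 2: fit Q2(x2, a2) with a linear model including an interaction.
X2 = np.column_stack([np.ones(n), x2, a2, a2 * x2])
b2 = ols(X2, y)
q2 = lambda x, a: b2[0] + b2[1] * x + a * (b2[2] + b2[3] * x)
v2 = np.maximum(q2(x2, 0), q2(x2, 1))        # value under optimal stage-2 rule

# Stage 1: regress the pseudo-outcome V2 on stage-1 information.
X1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
b1 = ols(X1, v2)
rule1 = lambda x: (b1[2] + b1[3] * x > 0).astype(int)  # estimated stage-1 rule
```

The limitation the abstract raises is visible here: if the true `y` were nonlinear in `x2`, the linear `Q2` would be misspecified, which is what motivates the weighted estimators and the multi-level action spaces discussed above.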
  • Item
    Statistical Perspectives in Teaching Deep Learning from Fundamentals to Applications
    (2020) Kim, Nathan; Han, David
    The use of artificial intelligence, machine learning, and deep learning has gained a lot of attention and become increasingly popular in many areas of application. Historically, machine learning and its theory had strong connections to statistics; however, deep learning is currently taught mostly from a computer science perspective and lacks statistical perspectives. In this work, we address this gap and discuss how to teach deep learning to the next generation of statisticians. We first describe some background and how to motivate students. We discuss the different terminologies used in computer science and statistics, and how deep learning procedures work, without getting into the mathematics. In response to the question of what to teach, we address how to organize deep learning content with a focus on the statistician's view: from a basic statistical understanding of neural networks to the latest hot topics on uncertainty quantification for deep learning predictions, which have been studied in Bayesian frameworks. Further, we discuss how to choose computational environments and help students develop programming skills. We also discuss how to develop homework that incorporates ideas from experimental design. Finally, we discuss how to expose students to domain knowledge and help build multi-disciplinary collaborations.
  • Item
    Using Ancestral Reconstruction of Chromosome Expression States (ARChES) to Understand the Evolution of Dosage Compensation
    (2020) Ramesh, Balan; Demuth, Jeff
    Ohno (1967) originally proposed that the sex difference in X-linked gene dose caused by the decay of Y-linked genes may impose a “peril of hemizygosity” and that regulatory mechanisms must compensate to make X=XX=AA at the level of expression. Recent evidence suggests that Ohno’s paradigm is not universal, but our understanding remains unclear because estimating the ancestral expression of X-linked genes is difficult or impossible in many systems. Many studies assess dosage compensation (DC) by comparing X:Autosome expression ratios, thereby implicitly assuming that current average autosomal gene expression (AA) is a good proxy for the average ancestral expression of X-linked genes. A more appropriate test would be whether X=XX=Ancestral expression, where “Ancestral” is the inferred expression level of each X-linked gene before becoming X-linked. The few studies that have attempted to compare X (or Z) linked gene expression to corresponding ancestral levels have relied on distantly related taxa that include changes in chromosome number and sex-determination system. Here, we study the evolution of dosage compensation by comparing expression of neo-X chromosome genes in Tribolium confusum to their inferred ancestral, autosomal expression state. The ancestral expression is estimated by analyzing RNA-Seq data across a time-calibrated phylogeny that includes four additional closely related species that all share an ancestral karyotype where the neo-X genes of T. confusum remain autosomal. We find that the neo-X in T. confusum is dosage balanced (X=XX) and dosage compensated (X=Ancestral), suggesting a chromosome-wide dosage compensation mechanism as envisioned by Ohno. Further, we observe that DC in T. castaneum, which was previously contentious, is fully balanced and compensated (X=XX=Ancestral).
The computational approach to analyzing DC evolution via Ancestral Reconstruction of Chromosome Expression States (ARChES) was developed using TACC and is publicly available. The ARChES workflow is computationally scalable and can be extended to analyze DC in any species.
  • Item
    Quantum Computation, Quantum Algorithms & Implications on Data Science
    (2020) Kim, Nathan; Garcia, Jeremy; Han, David
    Quantum computing is a revolutionary new computing paradigm, first theorized in 1981. It is based on quantum physics and quantum mechanics, which are fundamentally stochastic in nature, with inherent randomness and uncertainty. The power of quantum computing relies on three properties of a quantum bit: superposition, entanglement, and interference. Quantum algorithms are described by quantum circuits, and they are expected to solve decision problems, functional problems, oracular problems, sampling tasks, and optimization problems much faster than classical silicon-based computers. They are expected to have a tremendous impact on current Big Data technology, machine learning, and artificial intelligence. Despite the theoretical and physical advancements, there are still several technological barriers to the successful application of quantum computation. In this work, we review the current state of quantum computation and quantum algorithms, and discuss their implications for the practice of Data Science in the near future. There is no doubt that quantum computing will accelerate the process of scientific discoveries and industrial advancements, having a transformative impact on our society.
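Two of the three properties named above, superposition and entanglement, can be demonstrated with plain linear algebra on a simulated statevector, without any quantum SDK. The sketch below prepares a Bell state from |00⟩ with a Hadamard gate followed by a CNOT; it is a standard textbook construction, not material from the talk.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # control = first qubit

# Start in |00>, put the first qubit in superposition, then entangle.
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, np.eye(2)) @ state           # (|00> + |10>) / sqrt(2)
state = CNOT @ state                            # (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2                      # Born-rule measurement probabilities
```

Measuring either qubit now determines the other: only the outcomes 00 and 11 occur, each with probability 1/2, which is the entanglement that quantum algorithms exploit.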
  • Item
    Andromeda: A Few-body Plane Wave Calculator
    (2020) Jerke, Jonathan; Wu, Jackson; Poirier, Bill; Karwowski, Jacek
    At TACCSTER last year, we presented a novel method of ours to solve the 3-body lithium problem. Without finishing, the computation plateaued at -7.3 (of -7.4) Hartree on an L = 67^9 grid running on a single TACC Lonestar5 node for three months. We have now released a new version of the Andromeda code capable of embarrassingly parallel operation. This improvement followed from a significant speedup of half the process, namely the free and exact creation of the Hamiltonian quantum operators and their operation in Sums of Products form. Even though this does not speed up the vector decomposition process, which is still the rate-limiting step, we can now distribute processing per term-state combination across numerous computational resources to overcome this problem. In particular, any 2-body interaction quantum operator is now a summation of processes defined by separate 1-body matrices for the 2-body diagonal, 1-body diagonal, and off-diagonal aspects of the quantum operation. Thus, every core in a parallel process can individually initialize the Coulombic quantum operator, which allows embarrassingly parallel operation across several state vectors. The current release has integrated the TACC/launcher as a vehicle to handle parallel operations. Digitize your wave function with the most local representation of the plane-wave basis. Tackle strongly correlated problems with a Sums-of-Products representation that separates spatial components but remains fully multi-body. Compute 3-body quantum physics with a powerful scripting interface. Discover something.
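The structural idea behind a Sums-of-Products operator, and why its terms parallelize so naturally, can be shown in miniature: an operator H = Σ_t A_t ⊗ B_t built from small 1-body matrices can be applied term by term without ever assembling the full matrix, and each term's work is independent. This is a generic illustration, not the Andromeda code; all sizes and matrices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                            # 1-body basis size (hypothetical)
# Three separable terms, each a pair of small 1-body matrices (A_t, B_t).
terms = [(rng.standard_normal((n, n)), rng.standard_normal((n, n)))
         for _ in range(3)]
X = rng.standard_normal((n, n))                  # 2-body state as an n x n array

# Apply H = sum_t A_t (x) B_t term by term:
# for row-major vec, (A (x) B) vec(X) = vec(A X B^T).
# Each summand is independent work -> embarrassingly parallel over terms.
Y = sum(A @ X @ B.T for A, B in terms)

# Cross-check against the explicitly assembled n^2 x n^2 operator.
H_full = sum(np.kron(A, B) for A, B in terms)
ok = np.allclose(H_full @ X.ravel(), Y.ravel())
```

The term-by-term form touches only n×n matrices instead of an n²×n² one, which is what lets each core (or each term-state combination, in the abstract's phrasing) initialize and apply its piece of the operator independently.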
  • Item
    Synthesizing Dense and Colored 3D Point Clouds for Training Deep Neural Networks
    (2020) Arshad, Mohammad Samiul; Beksi, William
    3D point clouds are a compact, homogeneous representation with the ability to capture intricate details of the environment. They are useful for a wide variety of applications. For example, point clouds can be sampled from the mesh of manually designed objects for use as synthetic data for training deep learning networks. However, the geometry and texture of these point clouds are bounded by the resolution of the modeled objects. To facilitate learning with synthetic 3D point clouds, we present a novel conditional generative adversarial network that creates dense point clouds, with color, in an unsupervised manner. The difficulty of capturing intricate details at high resolutions is handled by a point transformer that progressively grows the network through the use of graph convolutions. Every training iteration evolves a point vector into a point cloud. Experimental results show that our network is capable of learning a 3D data distribution and produces colored point clouds with fine details at multiple resolutions.
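The baseline data-generation step the abstract mentions, sampling a colored point cloud from a mesh, can be sketched as follows: choose triangles proportionally to their area, place each point with uniform barycentric coordinates, and interpolate per-vertex color. The tetrahedron mesh and all names here are hypothetical; this is the standard sampling technique, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
# A toy mesh: a tetrahedron with one RGB color per vertex.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

# Choose faces with probability proportional to triangle area.
e1 = verts[faces[:, 1]] - verts[faces[:, 0]]
e2 = verts[faces[:, 2]] - verts[faces[:, 0]]
area = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
n_pts = 1024
f = rng.choice(len(faces), size=n_pts, p=area / area.sum())

# Uniform barycentric coordinates via the square-root trick.
r1, r2 = rng.random(n_pts), rng.random(n_pts)
u = 1.0 - np.sqrt(r1)
v = np.sqrt(r1) * (1.0 - r2)
w = 1.0 - u - v
bary = np.stack([u, v, w], axis=1)[:, :, None]   # (n_pts, 3, 1)

points = (bary * verts[faces[f]]).sum(axis=1)    # (n_pts, 3) positions
rgb = (bary * colors[faces[f]]).sum(axis=1)      # (n_pts, 3) interpolated colors
cloud = np.hstack([points, rgb])                 # (n_pts, 6) colored point cloud
```

The resolution limitation the abstract notes is visible here too: detail finer than the mesh's triangles and per-vertex colors cannot appear in `cloud`, which motivates generating dense colored clouds with a GAN instead.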