Browsing by Subject "Discontinuous Galerkin method"
Now showing 1 - 2 of 2
Item
Advances towards a multi-dimensional discontinuous Galerkin method for modeling hurricane storm surge induced flooding in coastal watersheds (2016-08)
Neupane, Prapti; Dawson, Clinton N.; Gamba, Irene M.; Engquist, Bjorn; Bui-Thanh, Tan; Moser, Robert D.

Coastal areas are regions of high population density and urbanization. These areas are highly vulnerable to inundation and flooding, not only because of hurricane storm surge but also because of the torrential rainfall that often accompanies hurricanes. To accurately predict the extent of damage such an event might cause, any model used to simulate this process needs to couple rainfall with storm surge. Previous work addressing this issue has mostly used a unidirectional coupling technique, taking one of two approaches. In the first, a hydrology model is used in the domain of interest and storm surge is incorporated as a boundary condition. In the second, a storm surge model is used in the domain of interest and rainfall is incorporated as a river inflow boundary condition. Neither approach allows the rainwater and the surge water to interact bidirectionally. To improve on those efforts, in this dissertation we develop a comprehensive framework for modeling flooding in coastal watersheds. We present an approach to decompose a watershed into multiple sub-domains depending on the dynamics of flow in the region. We use different simplifications of the shallow water equations on different sub-domains to gain computational efficiency without compromising physical accuracy. The sub-domains are coupled with each other through numerical fluxes in a discontinuous Galerkin framework. This technique allows for a tight coupling of storm surge with rainfall runoff, so that the resulting flooding is truly influenced by the nonlinear interaction of these two processes. We present numerical tests to validate and verify the methods used for modeling flow in the different sub-domains, as well as the techniques used for coupling the sub-domains with each other.
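The abstract leaves the discretization implicit; as a minimal sketch of the coupling idea it describes, a standard DG weak form for a conservation law shows how sub-domains can communicate solely through an interface flux. The local Lax-Friedrichs flux used here is an assumption for illustration, not necessarily the dissertation's choice.

% Sketch only: a generic DG weak form, not the dissertation's formulation.
% On each element K, the shallow-water system in conservation form,
%   \partial_t q + \nabla \cdot F(q) = s(q),
% is tested against a polynomial test function v:
\int_K \partial_t q \, v \, dx
  - \int_K F(q) \cdot \nabla v \, dx
  + \int_{\partial K} \hat{F}(q^-, q^+; n) \, v \, ds
  = \int_K s(q) \, v \, dx .
% Neighboring elements, and hence neighboring sub-domains, enter only
% through the numerical flux, e.g. a local Lax-Friedrichs flux (assumed):
\hat{F}(q^-, q^+; n)
  = \tfrac{1}{2} \big( F(q^-) + F(q^+) \big) \cdot n
  - \tfrac{\lambda}{2} \, (q^+ - q^-),
% where q^- and q^+ are the solution traces on the two sides of the
% interface and \lambda bounds the local wave speed.

Because inter-domain communication is confined to the flux term, each sub-domain can carry a different simplification of the shallow water equations while still exchanging information consistently at the interfaces.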
Item
GPU-accelerated high-performance computing for architecture-aware wave simulation based on discontinuous Galerkin algorithms (2020-05-09)
Hanindhito, Bagus; John, Lizy Kurian

Full-waveform inversion has been an essential method for the oil and gas industry to estimate the properties of the Earth's subsurface without the need to observe it directly by digging, drilling, or tunneling, thus lowering exploration costs. The method relies on generated seismic waves and the acquisition of the reflected wave data. Since different types of rock, sediment, and material have different properties, the acquired data can be used, for example, to approximate the location of mineral and oil deposits. The first problem, which is the focus of this research, is the forward problem: generating synthetic seismograms from a given model. The second is the inverse problem: finding the model that best describes the acquired data. The area of a seismic survey is generally massive and can easily generate a vast amount of data, which is used to find the best Earth model, so a considerable amount of computing power is required to solve these problems. Industrial-scale wave simulators typically use multiple CPUs to accelerate the computation, and as the size of the problem increases, the time needed to run the simulation increases accordingly. In this thesis, we investigated the implementation of a CPU-based wave simulator to find the parallelism that can be extracted. We mapped this massive parallelism onto GPUs, which have thousands of cores and are thus well suited to the job. We performed additional optimizations of the baseline code to improve performance, and we developed a method to verify the functionality of our implementation against the original code. The GPU-accelerated version was then compared to the original CPU code, running the simulation at different levels of discretization on both consumer-class and datacenter-class GPUs. For the double-precision runs, our benchmarks show speed-ups of over 120x, 210x, and 330x on the GeForce GTX 1080 Ti, Tesla P100, and Tesla V100 GPUs, respectively, compared to dual Intel Xeon Platinum 8160 CPUs with a total of 48 cores.
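As a minimal sketch of the thread mapping the abstract describes (the one-thread-per-degree-of-freedom layout, the kernel name, the constants, and the explicit update rule are illustrative assumptions, not the thesis's actual code), the data-parallel structure might look like this in CUDA:

// Sketch only: maps element-local DG updates to GPU threads, one thread
// per (element, node) degree of freedom. NODES_PER_ELEM and the simple
// explicit time step are assumed for illustration.
#include <cstdio>
#include <cuda_runtime.h>

constexpr int NODES_PER_ELEM = 64;  // e.g. a high-order 3D element (assumed)

__global__ void update_fields(const float* rhs, float* q,
                              float dt, int num_dofs) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global DOF index
    if (i < num_dofs) {
        q[i] += dt * rhs[i];  // explicit update: fully independent per DOF
    }
}

int main() {
    const int num_elems = 100000;                  // illustrative problem size
    const int num_dofs  = num_elems * NODES_PER_ELEM;

    float *q, *rhs;
    cudaMallocManaged(&q,   num_dofs * sizeof(float));
    cudaMallocManaged(&rhs, num_dofs * sizeof(float));
    for (int i = 0; i < num_dofs; ++i) { q[i] = 0.f; rhs[i] = 1.f; }

    // One thread per DOF: millions of independent updates keep the
    // GPU's thousands of cores busy.
    int block = 256;
    int grid  = (num_dofs + block - 1) / block;
    update_fields<<<grid, block>>>(rhs, q, 0.001f, num_dofs);
    cudaDeviceSynchronize();

    printf("q[0] after one step: %f\n", q[0]);
    cudaFree(q); cudaFree(rhs);
    return 0;
}

This kind of mapping suits DG methods because the update of each degree of freedom in an explicit time step depends only on element-local data, leaving no global coupling to serialize the threads.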