Task-based parallelism for hurricane storm surge modeling

Date

2020-07-30

Authors

Bremer, Maximilian Heimo Moritz


Abstract

Hurricanes are incredibly devastating events, constituting seven of the ten most costly U.S. natural disasters since 1980. The development of real-time forecasting models that accurately capture a storm's dynamics plays an essential role in informing local officials' emergency management decisions. ADCIRC is one such model, operationally active in the National Oceanic and Atmospheric Administration's Hurricane Surge On-Demand Forecast System. However, ADCIRC faces several limitations: it struggles to solve highly advective flows and is not locally mass conservative. These shortcomings limit the applicable flow regimes and can cause unphysical behavior. One proposed alternative that addresses these limitations is the discontinuous Galerkin (DG) finite element method. However, the DG method's high computational cost makes it unsuitable for real-time forecasting and has limited its adoption among coastal engineers. Simultaneously, efforts to build an exascale machine and the resulting power-constrained computing architectures have led to massive increases in the concurrency applications are expected to manage. These architectural shifts have in turn caused some groups to turn away from the traditional flat MPI or MPI+OpenMP programming models toward more functional task-based programming models, designed specifically to perform well on these next-generation architectures. The aim of this thesis is to utilize these new task-based programming models to accelerate DG simulations for coastal applications. We explore two strategies for accelerating the DG method for storm surge simulation.

The first strategy addresses load imbalance caused by coastal flooding. During the simulation of hurricane storm surge, cells are classified as either wet or dry. Dry cells can be updated trivially, while wet cells require full evaluation of the physics. As the storm makes landfall and causes flooding, this asymmetry generates a load imbalance. We present two load balancing strategies---an asynchronous diffusion-based approach and a semi-static approach---to optimize compute resource utilization. These load balancing strategies are analyzed using a discrete-event simulation that models the task-based storm surge simulation. We find speed-ups of up to 56% over the currently used mesh partitioning and up to 97% of the theoretical speed-up.
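The idea behind diffusion-based load balancing can be illustrated with a minimal sketch, not taken from the dissertation: each rank repeatedly sheds a fraction of its excess work to lighter neighbors, so load differences diffuse away much like heat. The function name, damping factor, and rank topology below are illustrative assumptions.

```python
def diffuse_step(loads, neighbors, alpha=0.5):
    """One diffusion sweep: each rank sends a fraction `alpha` of the load
    difference toward each lighter neighbor (processed from the heavier side,
    so total load is conserved). Illustrative sketch, not the thesis scheme."""
    transfer = [0.0] * len(loads)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if loads[i] > loads[j]:
                # Split the outflow across this rank's edges; alpha damps overshoot.
                flow = alpha * (loads[i] - loads[j]) / len(nbrs)
                transfer[i] -= flow
                transfer[j] += flow
    return [l + t for l, t in zip(loads, transfer)]

# Four ranks in a line; rank 0 holds most of the wet (expensive) cells,
# mimicking the imbalance created when flooding concentrates in one region.
loads = [10.0, 2.0, 2.0, 2.0]
neighbors = [[1], [0, 2], [1, 3], [2]]
for _ in range(50):
    loads = diffuse_step(loads, neighbors)
# loads approach the mean (4.0 per rank) while total load is conserved
```

In an asynchronous setting, each rank would run such sweeps against its neighbors' most recently communicated loads rather than in lockstep, which is what makes the approach attractive for task-based runtimes.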

The second strategy focuses on a first-order adaptive local timestepping scheme for nonlinear conservation laws. For problems such as hurricane storm surge, the global CFL timestepping constraint is overly stringent for the majority of cells. We present a timestepping scheme that allows cells to advance stably based on local stability constraints. Since allowable timestep sizes depend on the state of the solution, care must be taken not to incur causality errors. The algorithm is accompanied by a proof of formal correctness, which ensures that with a sufficiently small minimum timestep, the solution exhibits desired characteristics such as a maximum principle and total variation stability. The algorithm is parallelized using a speculative discrete-event simulator. Performance results show that the implementation recovers 59%-77% of the optimal speed-up.
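Why a global CFL constraint is wasteful can be seen in a small back-of-the-envelope sketch (a toy illustration, not the dissertation's algorithm): count the cell updates needed to reach a final time when every cell takes the worst-case global step versus its own local CFL step. The wave speeds and mesh spacing below are made-up values.

```python
import math

def update_counts(speeds, dx, t_end, cfl=0.9):
    """Compare total cell updates under a single global CFL timestep
    versus per-cell local CFL timesteps. Illustrative sketch only."""
    local_dts = [cfl * dx / s for s in speeds]
    global_dt = min(local_dts)  # global CFL: every cell takes the worst-case step
    global_updates = len(speeds) * math.ceil(t_end / global_dt)
    local_updates = sum(math.ceil(t_end / dt) for dt in local_dts)
    return global_updates, local_updates

# A few fast (deep, wet) cells force a tiny global step on many slow cells.
speeds = [10.0] * 2 + [1.0] * 98   # illustrative shallow-water wave speeds
g, l = update_counts(speeds, dx=100.0, t_end=3600.0, cfl=0.9)
# local timestepping needs roughly an order of magnitude fewer cell updates here
```

A local scheme that realizes these savings must also respect causality: because each cell's allowable step depends on the evolving solution, a cell cannot consume neighbor data from a future it might invalidate, which is what the speculative discrete-event execution in the thesis manages.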
