Data-driven parametric reduced-order models: operator inference for reactive flow applications (2023-08-10)
Author: McQuarrie, Shane Alexander, 1991-
Committee: Willcox, Karen; Ward, Rachel; Ghattas, Omar; Moser, Robert; Gunzburger, Max

This work presents new approaches to model order reduction for a wide class of parameterized, time-dependent partial differential equations (PDEs). Our objective is to non-intrusively construct inexpensive computational models that can be solved rapidly to map parameter values to approximate PDE solutions. Such parameterized reduced-order models may be used as physics-based surrogates for uncertainty quantification and inverse problems that require many forward solves of parametric PDEs. Our approach is based on Operator Inference, a scientific machine learning framework combining data-driven learning and physics-based modeling. Traditional model order reduction methods use direct reductions of simulation codes to construct reduced-order models, but this is often infeasible for complex production-level codes. Operator Inference, by contrast, constructs reduced-order models using only knowledge of the structure of the governing equations for the physical system and available simulation data. The major contributions of this work are threefold: (i) improving the robustness and algorithmic scalability of the Operator Inference approach by requiring stability from the learned reduced-order models through appropriate regularization, (ii) efficiently quantifying the errors and uncertainties associated with learning reduced-order models from data alone, and (iii) adapting the Operator Inference framework to the parametric setting for a wide class of problems. A customizable regularization is introduced to the operator regression problem to avoid overfitting, improve conditioning in the regression, and promote stability in the model.
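To fix ideas, the regularized operator regression at the heart of Operator Inference can be sketched in a few lines of NumPy. This is a minimal illustration, not the dissertation's implementation: it assumes snapshot data already projected to reduced coordinates, a quadratic model form dq/dt ≈ A q + H (q ⊗ q), and a single scalar Tikhonov penalty `reg` (the dissertation develops more general, tunable regularizations).

```python
import numpy as np

def operator_inference(Q, Qdot, reg=1e-2):
    """Fit a quadratic reduced model  dq/dt ≈ A q + H (q ⊗ q)
    from reduced snapshots Q (r x k) and time derivatives Qdot (r x k)
    by Tikhonov-regularized least squares (illustrative sketch)."""
    r, k = Q.shape
    # Quadratic (Kronecker) features q ⊗ q, one column per snapshot.
    Q2 = np.einsum("ik,jk->ijk", Q, Q).reshape(r * r, k)
    D = np.vstack([Q, Q2]).T                       # k x (r + r^2) data matrix
    # Regularized normal equations: (DᵀD + λI) Oᵀ = Dᵀ Qdotᵀ.
    lhs = D.T @ D + reg * np.eye(D.shape[1])
    O = np.linalg.solve(lhs, D.T @ Qdot.T).T       # stacked operators [A, H]
    return O[:, :r], O[:, r:]
```

The penalty `reg` shrinks the learned operators toward zero, which combats overfitting and ill-conditioning in exactly the sense described above; choosing its value well is the subject of the optimization discussed next in the abstract.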
The task of determining an optimal regularization is posed as an optimization problem that balances training error against the stability of long-time integration dynamics. This regularization has a statistical interpretation when the task of learning a reduced-order model from data is posed as a Bayesian inverse problem. In this setting, the resulting posterior distribution characterizes the operators defining the reduced-order model, so the predictions subsequently issued by the reduced-order model are endowed with uncertainty. The statistical moments of these predictions are estimated via Monte Carlo sampling of the posterior distribution; because the reduced models are fast to solve, this sampling is computationally efficient. We further extend the framework to two classes of parametric problems: PDE systems with time-periodic solutions, and those with affine parametric dependencies. In the first case, the theory of linear time-periodic systems motivates a linear reduced-order model with a new choice of inputs; in the second case, the parametric structure of the governing equations is embedded directly into the reduced-order model and the corresponding operator regression. We also state and prove well-posedness conditions for the associated learning problem. Finally, we present an open-source Python implementation of the approach. Our approach applies to a wide range of scientific and engineering problems and is demonstrated here on multiple problems, including the compressible Euler equations for an ideal gas, the FitzHugh-Nagumo neuron model, a single-injector combustion process, and a plasma flow model for a glow discharge device. With appropriate regularization and an informed selection of learning variables, the reduced-order models exhibit high accuracy in re-predicting the training regime and acceptable accuracy in predicting future dynamics, while achieving several orders of magnitude speedup in computational cost.
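The affine-parametric idea can also be illustrated with a small sketch. Assuming a linear model dq/dt = A(µ)q whose operator has the affine form A(µ) = Σᵢ θᵢ(µ) Aᵢ (the coefficient functions θᵢ and the regression setup below are illustrative, not the dissertation's code), training data from several parameter values are stacked into a single regression whose unknowns are the component operators Aᵢ:

```python
import numpy as np

def affine_parametric_opinf(datasets, thetas, reg=1e-8):
    """Learn component operators A_1, ..., A_m of an affine-parametric
    linear model  dq/dt = sum_i theta_i(mu) A_i q  (illustrative sketch).

    datasets : list of (Q, Qdot) snapshot pairs, one per training parameter
    thetas   : list of coefficient vectors [theta_1(mu_j), ..., theta_m(mu_j)]
    """
    r = datasets[0][0].shape[0]
    m = len(thetas[0])
    D_blocks, R_blocks = [], []
    for (Q, Qdot), th in zip(datasets, thetas):
        # Block column i of the data matrix holds theta_i(mu_j) * Q_j.
        D_blocks.append(np.vstack([th_i * Q for th_i in th]).T)   # k_j x (m*r)
        R_blocks.append(Qdot.T)
    D = np.vstack(D_blocks)
    R = np.vstack(R_blocks)
    # One regularized least-squares solve recovers all components at once.
    O = np.linalg.solve(D.T @ D + reg * np.eye(m * r), D.T @ R).T  # r x (m*r)
    return [O[:, i * r:(i + 1) * r] for i in range(m)]
```

Because the affine structure is embedded in the data matrix itself, the learned model can then be evaluated at any new parameter value by re-assembling A(µ) = Σᵢ θᵢ(µ) Aᵢ, with no further training.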