[1]
Schwerdtner, P., Law, F., Wang, Q., Gazen, C., Chen, Y.F., Ihme, M. & Peherstorfer, B. Uncertainty quantification in coupled wildfire-atmosphere simulations at scale. PNAS Nexus, 2024.
Abstract: Uncertainties in wildfire simulations pose a major challenge for making decisions about fire management, mitigation, and evacuations. However, ensemble calculations to quantify uncertainties are prohibitively expensive with high-fidelity models that are needed to capture today's ever more intense and severe wildfires. This work shows that surrogate models trained on related data enable scaling multi-fidelity uncertainty quantification to high-fidelity wildfire simulations of unprecedented scale with billions of degrees of freedom. The key insight is that correlation is all that matters while bias is irrelevant for speeding up uncertainty quantification when surrogate models are combined with high-fidelity models in multi-fidelity approaches. This allows the surrogate models to be trained on abundantly available or cheaply generated related data samples that can be strongly biased as long as they are correlated with predictions of high-fidelity simulations. Numerical results with scenarios of the Tubbs 2017 wildfire demonstrate that surrogate models trained on related data make multi-fidelity uncertainty quantification in large-scale wildfire simulations practical by reducing the training time by several orders of magnitude from three months to under three hours and predicting the burned area at least twice as accurately compared to using high-fidelity simulations alone for a fixed computational budget. More generally, the results suggest that leveraging related data can greatly extend the scope of surrogate modeling, potentially benefiting other fields that require uncertainty quantification in computationally expensive high-fidelity simulations.
A schematic code sketch of the bias-versus-correlation point follows this entry.
BibTeX:
@article{Sch24FireUQ,
title = {Uncertainty quantification in coupled wildfire-atmosphere simulations at scale},
author = {Schwerdtner, P. and Law, F. and Wang, Q. and Gazen, C. and Chen, Y.F. and Ihme, M. and Peherstorfer, B.},
journal = {PNAS Nexus},
year = {2024},
}
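The abstract's central claim, that surrogate bias cancels while correlation drives variance reduction, is the standard control-variate mechanism of multi-fidelity estimators. Below is a minimal, hedged sketch (toy models and names of my own choosing, not the paper's code) showing that a surrogate with a large constant bias still yields an unbiased multi-fidelity mean estimate:

```python
# Hedged sketch: a two-model control-variate estimator illustrating why surrogate
# *correlation* matters while surrogate *bias* does not.
import numpy as np

rng = np.random.default_rng(0)

def f_hi(x):          # stand-in for an expensive high-fidelity model
    return np.sin(x) + 0.1 * x**2

def f_lo(x):          # cheap surrogate: strongly biased but perfectly correlated
    return np.sin(x) + 0.1 * x**2 + 5.0   # constant bias of +5

x_hi = rng.normal(size=50)        # few expensive evaluations
x_lo = rng.normal(size=50_000)    # many cheap evaluations

y_hi, y_lo_paired = f_hi(x_hi), f_lo(x_hi)
rho = np.corrcoef(y_hi, y_lo_paired)[0, 1]
alpha = rho * np.std(y_hi) / np.std(y_lo_paired)   # optimal control-variate weight

# The surrogate enters only through a difference of its sample means,
# so any constant bias cancels exactly; only rho affects the variance.
est = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
```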
[2]
Berman, J., Blickhan, T. & Peherstorfer, B. Parametric model reduction of mean-field and stochastic systems via higher-order action matching. NeurIPS, 2024.
Abstract: The aim of this work is to learn models of population dynamics of physical systems that feature stochastic and mean-field effects and that depend on physics parameters. The learned models can act as surrogates of classical numerical models to efficiently predict the system behavior over the physics parameters. Building on the Benamou-Brenier formula from optimal transport and action matching, we use a variational problem to infer parameter- and time-dependent gradient fields that represent approximations of the population dynamics. The inferred gradient fields can then be used to rapidly generate sample trajectories that mimic the dynamics of the physical system on a population level over varying physics parameters. We show that combining Monte Carlo sampling with higher-order quadrature rules is critical for accurately estimating the training objective from sample data and for stabilizing the training process. We demonstrate on Vlasov-Poisson instabilities as well as on high-dimensional particle and chaotic systems that our approach accurately predicts population dynamics over a wide range of parameters and outperforms state-of-the-art diffusion-based and flow-based modeling that simply condition on time and physics parameters.
BibTeX:
@inproceedings{BBP24HOAM,
title = {Parametric model reduction of mean-field and stochastic systems via higher-order action matching},
author = {Berman, J. and Blickhan, T. and Peherstorfer, B.},
booktitle = {NeurIPS},
year = {2024},
}
[3]
Werner, S.W.R. & Peherstorfer, B. System stabilization with policy optimization on unstable latent manifolds. Computer Methods in Applied Mechanics and Engineering, 433, 2024.
Abstract: Stability is a basic requirement when studying the behavior of dynamical systems. However, stabilizing dynamical systems via reinforcement learning is challenging because only little data can be collected over short time horizons before instabilities are triggered and data become meaningless. This work introduces a reinforcement learning approach that is formulated over latent manifolds of unstable dynamics so that stabilizing policies can be trained from few data samples. The unstable manifolds are minimal in the sense that they contain the lowest dimensional dynamics that are necessary for learning policies that guarantee stabilization. This is in stark contrast to generic latent manifolds that aim to approximate all -- stable and unstable -- system dynamics and thus are higher dimensional and often require higher amounts of data. Experiments demonstrate that the proposed approach stabilizes even complex physical systems from few data samples for which other methods that operate either directly in the system state space or on generic latent manifolds fail.
BibTeX:
@article{WP24Policy,
title = {System stabilization with policy optimization on unstable latent manifolds},
author = {Werner, S.W.R. and Peherstorfer, B.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {433},
year = {2024},
}
[4]
Maurais, A., Alsup, T., Peherstorfer, B. & Marzouk, Y. Multifidelity Covariance Estimation via Regression on the Manifold of Symmetric Positive Definite Matrices. SIAM Journal on Mathematics of Data Science, 2024 (accepted).
Abstract: We introduce a multifidelity estimator of covariance matrices formulated as the solution to a regression problem on the manifold of symmetric positive definite matrices. The estimator is positive definite by construction, and the Mahalanobis distance minimized to obtain it possesses properties which enable practical computation. We show that our manifold regression multifidelity (MRMF) covariance estimator is a maximum likelihood estimator under a certain error model on manifold tangent space. More broadly, we show that our Riemannian regression framework encompasses existing multifidelity covariance estimators constructed from control variates. We demonstrate via numerical examples that our estimator can provide significant decreases, up to one order of magnitude, in squared estimation error relative to both single-fidelity and other multifidelity covariance estimators. Furthermore, preservation of positive definiteness ensures that our estimator is compatible with downstream tasks, such as data assimilation and metric learning, in which this property is essential.
BibTeX:
@article{MAPM23CovarianceReg,
title = {Multifidelity Covariance Estimation via Regression on the Manifold of Symmetric Positive Definite Matrices},
author = {Maurais, A. and Alsup, T. and Peherstorfer, B. and Marzouk, Y.},
journal = {SIAM Journal on Mathematics of Data Science},
year = {2024},
}
[5]
Schwerdtner, P., Schulze, P., Berman, J. & Peherstorfer, B. Nonlinear embeddings for conserving Hamiltonians and other quantities with Neural Galerkin schemes. SIAM Journal on Scientific Computing, 2024 (accepted).
Abstract: This work focuses on the conservation of quantities such as Hamiltonians, mass, and momentum when solution fields of partial differential equations are approximated with nonlinear parametrizations such as deep networks. The proposed approach builds on Neural Galerkin schemes that are based on the Dirac--Frenkel variational principle to train nonlinear parametrizations sequentially in time. We first show that only adding constraints that aim to conserve quantities in continuous time can be insufficient because the nonlinear dependence on the parameters implies that even quantities that are linear in the solution fields become nonlinear in the parameters and thus are challenging to discretize in time. Instead, we propose Neural Galerkin schemes that compute at each time step an explicit embedding onto the manifold of nonlinearly parametrized solution fields to guarantee conservation of quantities. The embeddings can be combined with standard explicit and implicit time integration schemes. Numerical experiments demonstrate that the proposed approach conserves quantities up to machine precision.
BibTeX:
@article{SSBP23NGE,
title = {Nonlinear embeddings for conserving Hamiltonians and other quantities with Neural Galerkin schemes},
author = {Schwerdtner, P. and Schulze, P. and Berman, J. and Peherstorfer, B.},
journal = {SIAM Journal on Scientific Computing},
year = {2024},
}
[6]
Berman, J. & Peherstorfer, B. CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations. International Conference on Machine Learning (ICML), 2024.
Abstract: This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The adaptation can be either purely data-driven or via an equation-driven variational approach that provides Galerkin-optimal approximations. Because CoLoRA approximates solution fields locally in time, the rank of the weights can be kept small, which means that only few training trajectories are required offline so that CoLoRA is well suited for data-scarce regimes. Predictions with CoLoRA are orders of magnitude faster than with classical methods and their accuracy and parameter efficiency are higher compared to other neural network approaches.
A schematic sketch of such a layer follows this entry.
BibTeX:
@inproceedings{BP24COLORA,
title = {CoLoRA: Continuous low-rank adaptation for reduced implicit neural modeling of parameterized partial differential equations},
author = {Berman, J. and Peherstorfer, B.},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2024},
}
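As a reading aid, here is a minimal sketch of the continuous low-rank adaptation idea described in the abstract; the class name, the shapes, the tanh nonlinearity, and the single scalar amplitude per layer are illustrative assumptions of mine, not the paper's architecture:

```python
# Hedged sketch of a CoLoRA-style layer: a pre-trained weight W0 is adapted
# continuously in time through a low-rank correction whose scalar amplitude
# alpha(t) is the only online degree of freedom.
import numpy as np

class CoLoRALayer:
    def __init__(self, d_in, d_out, rank=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W0 = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)  # pre-trained, frozen
        self.B = rng.normal(size=(d_out, rank)) / np.sqrt(rank)   # pre-trained, frozen
        self.A = rng.normal(size=(rank, d_in)) / np.sqrt(d_in)    # pre-trained, frozen
        self.alpha = 0.0                                          # adapted online in t

    def __call__(self, x):
        W = self.W0 + self.alpha * (self.B @ self.A)  # continuous low-rank adaptation
        return np.tanh(W @ x)

layer = CoLoRALayer(d_in=16, d_out=16)
layer.alpha = 0.7   # in practice: fitted per time step, data- or equation-driven
y = layer(np.ones(16))
```

Freezing W0, B, and A after pre-training leaves very few parameters to adapt online, which is what makes time-stepping the reduced model cheap.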
[7]
Alsup, T., Hartland, T., Peherstorfer, B. & Petra, N. Further analysis of multilevel Stein variational gradient descent with an application to the Bayesian inference of glacier ice models. Advances in Computational Mathematics, 2024.
Abstract: Multilevel Stein variational gradient descent is a method for particle-based variational inference that leverages hierarchies of approximations of target distributions with varying costs and fidelity to computationally speed up inference. This work provides a cost complexity analysis of multilevel Stein variational gradient descent that applies under milder conditions than previous results, especially in discrete-in-time regimes and beyond the limited settings where Stein variational gradient descent achieves exponentially fast convergence. The analysis shows that the convergence rate of Stein variational gradient descent enters only as a constant factor for the cost complexity of the multilevel version, which means that the costs of the multilevel version scale independently of the convergence rate of Stein variational gradient descent on a single level. Numerical experiments with Bayesian inverse problems of inferring discretized basal sliding coefficient fields of the Arolla glacier ice demonstrate that multilevel Stein variational gradient descent achieves orders of magnitude speedups compared to its single-level version.
A schematic sketch of the multilevel idea follows this entry.
BibTeX:
@article{AHPP22FurtherMLSVGD,
title = {Further analysis of multilevel Stein variational gradient descent with an application to the Bayesian inference of glacier ice models},
author = {Alsup, T. and Hartland, T. and Peherstorfer, B. and Petra, N.},
journal = {Advances in Computational Mathematics},
year = {2024},
}
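To make the multilevel idea concrete, the following hedged sketch runs standard Stein variational gradient descent (RBF kernel) for many iterations against a cheap coarse target and only a few against the expensive fine target; the toy Gaussian targets and all step-size choices are mine, not the paper's glacier-ice setup:

```python
# Hedged sketch: SVGD warm-started on a coarse (cheap) target, then refined on a
# fine (expensive) target, in the spirit of multilevel SVGD.
import numpy as np

def svgd_step(x, grad_logp, h=0.5, eps=0.05):
    diff = x[:, None, :] - x[None, :, :]                 # pairwise differences x_j - x_i
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))   # RBF kernel matrix
    grad_k = -diff / h**2 * k[..., None]                 # grad of kernel in its first arg
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / len(x)
    return x + eps * phi

coarse = lambda x: -(x - 1.0)        # grad log density of N(1, 1): cheap level
fine = lambda x: -2.0 * (x - 1.2)    # grad log density of N(1.2, 0.5): expensive level

x = np.random.default_rng(2).normal(size=(100, 1))
for _ in range(500):                 # most iterations on the cheap level
    x = svgd_step(x, coarse)
for _ in range(50):                  # only a few on the expensive level
    x = svgd_step(x, fine)
```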
[8]
Wen, Y., Vanden-Eijnden, E. & Peherstorfer, B. Coupling parameter and particle dynamics for adaptive sampling in Neural Galerkin schemes. Physica D, 2024.
Abstract: Training nonlinear parametrizations such as deep neural networks to numerically approximate solutions of partial differential equations is often based on minimizing a loss that includes the residual, which is analytically available in limited settings only. At the same time, empirically estimating the training loss is challenging because residuals and related quantities can have high variance, especially for transport-dominated and high-dimensional problems that exhibit local features such as waves and coherent structures. Thus, estimators based on data samples from un-informed, uniform distributions are inefficient. This work introduces Neural Galerkin schemes that estimate the training loss with data from adaptive distributions, which are empirically represented via ensembles of particles. The ensembles are actively adapted by evolving the particles with dynamics coupled to the nonlinear parametrizations of the solution fields so that the ensembles remain informative for estimating the training loss. Numerical experiments indicate that few dynamic particles are sufficient for obtaining accurate empirical estimates of the training loss, even for problems with local features and with high-dimensional spatial domains.
BibTeX:
@article{WEP23NGSampling,
title = {Coupling parameter and particle dynamics for adaptive sampling in Neural Galerkin schemes},
author = {Wen, Y. and Vanden-Eijnden, E. and Peherstorfer, B.},
journal = {Physica D},
year = {2024},
}
[9]
Goyal, P., Peherstorfer, B. & Benner, P. Rank-Minimizing and Structured Model Inference. SIAM Journal on Scientific Computing, 2024.
Abstract: While extracting information from data with machine learning plays an increasingly important role, physical laws and other first principles continue to provide critical insights about systems and processes of interest in science and engineering. This work introduces a method that infers models from data with physical insights encoded in the form of structure and that minimizes the model order so that the training data are fitted well while redundant degrees of freedom without conditions and sufficient data to fix them are automatically eliminated. The models are formulated via solution matrices of specific instances of generalized Sylvester equations that enforce interpolation of the training data and relate the model order to the rank of the solution matrices. The proposed method numerically solves the Sylvester equations for minimal-rank solutions and so obtains models of low order. Numerical experiments demonstrate that the combination of structure preservation and rank minimization leads to accurate models with orders of magnitude fewer degrees of freedom than models of comparable prediction quality that are learned with structure preservation alone.
BibTeX:
@article{GPB23Rank,
title = {Rank-Minimizing and Structured Model Inference},
author = {Goyal, P. and Peherstorfer, B. and Benner, P.},
journal = {SIAM Journal on Scientific Computing},
year = {2024},
}
[10]
Bruna, J., Peherstorfer, B. & Vanden-Eijnden, E. Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations. Journal of Computational Physics, 2023.
Abstract: Machine learning methods have been shown to give accurate predictions in high dimensions provided that sufficient training data are available. Yet, many interesting questions in science and engineering involve situations where initially no data are available and the principal aim is to gather insights from a known model. Here we consider this problem in the context of systems whose evolution can be described by partial differential equations (PDEs). We use deep learning to solve these equations by generating data on-the-fly when and where they are needed, without prior information about the solution. The proposed Neural Galerkin schemes derive nonlinear dynamical equations for the network weights by minimization of the residual of the time derivative of the solution, and solve these equations using standard integrators for initial value problems. The sequential learning of the weights over time allows for adaptive collection of new input data for residual estimation. This step uses importance sampling informed by the current state of the solution, in contrast with other machine learning methods for PDEs that optimize the network parameters globally in time. This active form of data acquisition is essential to enable the approximation power of the neural networks and to break the curse of dimensionality faced by non-adaptive learning strategies. The applicability of the method is illustrated on several numerical examples involving high-dimensional PDEs, including advection equations with many variables, as well as Fokker-Planck equations for systems with several interacting particles.
A schematic sketch of the parameter evolution follows this entry.
BibTeX:
@article{BPE22NG,
title = {Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations},
author = {Bruna, J. and Peherstorfer, B. and Vanden-Eijnden, E.},
journal = {Journal of Computational Physics},
year = {2023},
}
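A minimal sketch of one ingredient described in the abstract, the sequential-in-time evolution of the parameters by least squares on the residual of the time derivative: here a two-parameter Gaussian ansatz stands in for a deep network, the collocation points are sampled once rather than adaptively, and the advection test case is an illustrative assumption of mine:

```python
# Hedged sketch of Neural Galerkin time stepping on u_t = -a u_x with a Gaussian
# ansatz u(x; theta), theta = (center c, width s); explicit Euler on the
# least-squares parameter ODE.
import numpy as np

def u(x, theta):                       # Gaussian ansatz; stands in for a deep net
    c, s = theta
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def du_dx(x, theta):
    c, s = theta
    return u(x, theta) * (-(x - c) / s**2)

def jac_theta(x, theta, eps=1e-6):     # finite-difference Jacobian d u / d theta
    cols = []
    for i in range(len(theta)):
        tp = np.array(theta, float); tp[i] += eps
        cols.append((u(x, tp) - u(x, theta)) / eps)
    return np.stack(cols, axis=1)

a, dt, theta = 1.0, 1e-3, np.array([0.0, 0.5])
x = np.random.default_rng(1).uniform(-3, 3, size=256)  # collocation points (fixed here)

for _ in range(1000):
    J = jac_theta(x, theta)
    rhs = -a * du_dx(x, theta)         # spatial right-hand side of the PDE
    theta_dot, *_ = np.linalg.lstsq(J, rhs, rcond=None)
    theta += dt * theta_dot            # Euler step on the parameter ODE
```

The active-learning component of the paper would replace the fixed x above by samples adapted to the current state of the solution.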
[11]
Kramer, B., Peherstorfer, B. & Willcox, K. Learning Nonlinear Reduced Models from Data with Operator Inference. Annual Review of Fluid Mechanics, 56, 2024.
Abstract: This review discusses Operator Inference, a nonintrusive reduced modeling approach that incorporates physical governing equations by defining a structured polynomial form for the reduced model, and then learns the corresponding reduced operators from simulated training data. The polynomial model form of Operator Inference is sufficiently expressive to cover a wide range of nonlinear dynamics found in fluid mechanics and other fields of science and engineering, while still providing efficient reduced model computations. The learning steps of Operator Inference are rooted in classical projection-based model reduction; thus, some of the rich theory of model reduction can be applied to models learned with Operator Inference. This connection to projection-based model reduction theory offers a pathway toward deriving error estimates and gaining insights to improve predictions. Furthermore, through formulations of Operator Inference that preserve Hamiltonian and other structures, important physical properties such as energy conservation can be guaranteed in the predictions of the reduced model beyond the training horizon. This review illustrates key computational steps of Operator Inference through a large-scale combustion example.
A schematic sketch of the regression follows this entry.
BibTeX:
@article{KPKOISurvey2024,
title = {Learning Nonlinear Reduced Models from Data with Operator Inference},
author = {Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Annual Review of Fluid Mechanics},
volume = {56},
year = {2024},
}
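For orientation, here is a hedged sketch of the basic Operator Inference regression the review describes, with a quadratic model form; the random placeholder snapshots are only there to make the snippet self-contained and are not meaningful data:

```python
# Hedged sketch of vanilla Operator Inference: project snapshots onto a POD basis,
# then fit reduced linear + quadratic operators by least squares
# (no regularization or structure constraints, unlike the refinements in the review).
import numpy as np

rng = np.random.default_rng(3)
U_snap = rng.normal(size=(500, 200))      # state snapshots (placeholder data)
Udot_snap = rng.normal(size=(500, 200))   # time derivatives of the snapshots

V = np.linalg.svd(U_snap, full_matrices=False)[0][:, :8]  # POD basis, r = 8
Xr = V.T @ U_snap                          # reduced states, shape (r, K)
Rr = V.T @ Udot_snap                       # reduced time derivatives

r, K = Xr.shape
X2 = np.einsum("ik,jk->ijk", Xr, Xr).reshape(r * r, K)  # quadratic (Kronecker) terms
D = np.vstack([Xr, X2]).T                  # data matrix of the regression

ops = np.linalg.lstsq(D, Rr.T, rcond=None)[0].T  # fitted [A, H]: dx/dt ≈ A x + H (x ⊗ x)
A_hat, H_hat = ops[:, :r], ops[:, r:]
```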
[12]
Berman, J. & Peherstorfer, B. Randomized Sparse Neural Galerkin Schemes for Solving Evolution Equations with Deep Networks. NeurIPS, 2023 (spotlight).
Abstract: Training neural networks sequentially in time to approximate solution fields of time-dependent partial differential equations can be beneficial for preserving causality and other physics properties; however, the sequential-in-time training is numerically challenging because training errors quickly accumulate and amplify over time. This work introduces Neural Galerkin schemes that update randomized sparse subsets of network parameters at each time step. The randomization avoids overfitting locally in time and so helps prevent the error from accumulating quickly over the sequential-in-time training, which is motivated by dropout that addresses a similar issue of overfitting due to neuron co-adaptation. The sparsity of the update reduces the computational costs of training without losing expressiveness because many of the network parameters are redundant locally at each time step. In numerical experiments with a wide range of evolution equations, the proposed scheme with randomized sparse updates is up to two orders of magnitude more accurate at a fixed computational budget and up to two orders of magnitude faster at a fixed accuracy than schemes with dense updates.
A schematic sketch of the sparse update follows this entry.
BibTeX:
@inproceedings{BP23RSNG,
title = {Randomized Sparse Neural Galerkin Schemes for Solving Evolution Equations with Deep Networks},
author = {Berman, J. and Peherstorfer, B.},
booktitle = {NeurIPS},
year = {2023},
}
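A hedged sketch of the randomized-sparse update described in the abstract, reusing the jac_theta/rhs conventions of the Neural Galerkin sketch after entry [10]; the subset size, the step size, and the function signatures are illustrative assumptions:

```python
# Hedged sketch of a randomized-sparse Neural Galerkin step: solve the
# least-squares problem only over a random subset of the parameters.
import numpy as np

rng = np.random.default_rng(4)

def sparse_ng_step(theta, x, jac_theta, rhs, dt=1e-3, n_active=64):
    """theta: parameter vector; jac_theta(x, theta) -> (n_x, n_theta) Jacobian;
    rhs(x, theta) -> spatial right-hand side at the collocation points x."""
    idx = rng.choice(len(theta), size=min(n_active, len(theta)), replace=False)
    J = jac_theta(x, theta)[:, idx]            # columns of the active parameters only
    dtheta, *_ = np.linalg.lstsq(J, rhs(x, theta), rcond=None)
    theta = theta.copy()
    theta[idx] += dt * dtheta                  # update the sparse random subset
    return theta
```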
[13]
Singh, R., Uy, W.I.T. & Peherstorfer, B. Lookahead data-gathering strategies for online adaptive model reduction of transport-dominated problems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2023 (accepted).
Abstract: Online adaptive model reduction efficiently reduces numerical models of transport-dominated problems by updating reduced spaces over time, which leads to nonlinear approximations on latent manifolds that can achieve a faster error decay than classical linear model reduction methods that keep reduced spaces fixed. Critical for online adaptive model reduction is coupling the full and reduced model to judiciously gather data from the full model for adapting the reduced spaces so that accurate approximations of the evolving full-model solution fields can be maintained. In this work, we introduce lookahead data-gathering strategies that predict the next state of the full model for adapting reduced spaces towards dynamics that are likely to be seen in the immediate future. Numerical experiments demonstrate that the proposed lookahead strategies lead to accurate reduced models even for problems where previously introduced data-gathering strategies that look back in time fail to provide predictive models. The proposed lookahead strategies also improve the robustness and stability of online adaptive reduced models.
BibTeX:
@article{SUP23ADEIMAHEAD,
title = {Lookahead data-gathering strategies for online adaptive model reduction of transport-dominated problems},
author = {Singh, R. and Uy, W.I.T. and Peherstorfer, B.},
journal = {Chaos: An Interdisciplinary Journal of Nonlinear Science},
year = {2023},
}
[14]
Law, F., Cerfon, A., Peherstorfer, B. & Wechsung, F. Meta variance reduction for Monte Carlo estimation of energetic particle confinement during stellarator optimization. Journal of Computational Physics, 2023.
Abstract: This work introduces meta estimators that combine multiple multifidelity techniques based on control variates, importance sampling, and information reuse to yield a quasi-multiplicative amount of variance reduction. The proposed meta estimators are particularly efficient within outer-loop applications when the input distribution of the uncertainties changes during the outer loop, which is often the case in reliability-based design and shape optimization. We derive asymptotic bounds of the variance reduction of the meta estimators in the limit of convergence of the outer-loop results. We demonstrate the meta estimators, using data-driven surrogate models and biasing densities, on a design problem under uncertainty motivated by magnetic confinement fusion, namely the optimization of stellarator coil designs to maximize the estimated confinement of energetic particles. The meta estimators outperform all of their constituent variance reduction techniques alone, ultimately yielding two orders of magnitude speedup compared to standard Monte Carlo estimation at the same computational budget.
[15]
Maurais, A., Alsup, T., Peherstorfer, B. & Marzouk, Y. Multi-fidelity covariance estimation in the log-Euclidean geometry. International Conference on Machine Learning (ICML), 2023.
Abstract: We introduce a multi-fidelity estimator of covariance matrices that employs the log-Euclidean geometry of the symmetric positive-definite manifold. The estimator fuses samples from a hierarchy of data sources of differing fidelities and costs for variance reduction while guaranteeing definiteness, in contrast with previous approaches. The new estimator makes covariance estimation tractable in applications where simulation or data collection is expensive; to that end, we develop an optimal sample allocation scheme that minimizes the mean-squared error of the estimator given a fixed budget. Guaranteed definiteness is crucial to metric learning, data assimilation, and other downstream tasks. Evaluations of our approach using data from physical applications (heat conduction, fluid dynamics) demonstrate more accurate metric learning and speedups of more than one order of magnitude compared to benchmarks.
A schematic sketch of the estimator follows this entry.
BibTeX:
@inproceedings{MAPY23LEMF,
title = {Multi-fidelity covariance estimation in the log-Euclidean geometry},
author = {Maurais, A. and Alsup, T. and Peherstorfer, B. and Marzouk, Y.},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2023},
}
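A hedged sketch of the estimator's mechanics: covariance estimates are fused in the matrix-logarithm domain, so the fused estimate is symmetric positive definite by construction. The toy data, the multiplicative bias model, and the fixed control-variate weight alpha are my assumptions, not the paper's optimal allocation:

```python
# Hedged sketch of a log-Euclidean multi-fidelity covariance estimate: fuse a
# high-fidelity estimate from few samples with a low-fidelity control variate
# in the matrix-log domain.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
C_true = np.array([[2.0, 0.8], [0.8, 1.0]])
chol = np.linalg.cholesky(C_true)

hi = (chol @ rng.normal(size=(2, 30))).T                 # few high-fidelity samples
lo_paired = hi * 1.3                                     # correlated but biased twin
lo_extra = (chol @ rng.normal(size=(2, 3000))).T * 1.3   # many cheap low-fi samples

log_cov = lambda s: np.real(logm(np.cov(s, rowvar=False)))
alpha = 0.5   # control-variate weight; the paper chooses it optimally

log_mf = log_cov(hi) + alpha * (log_cov(np.vstack([lo_paired, lo_extra]))
                                - log_cov(lo_paired))
C_mf = expm(log_mf)   # SPD by construction: matrix exponential of a symmetric matrix
```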
[16]
Uy, W.I.T., Hartmann, D. & Peherstorfer, B. Operator inference with roll outs for learning reduced models from scarce and low-quality data. Computers & Mathematics with Applications, 145, 2023.
Abstract: Data-driven modeling has become a key building block in computational science and engineering. However, data that are available in science and engineering are typically scarce, often polluted with noise and affected by measurement errors and other perturbations, which makes learning the dynamics of systems challenging. In this work, we propose to combine data-driven modeling via operator inference with the dynamic training via roll outs of neural ordinary differential equations. Operator inference with roll outs inherits interpretability, scalability, and structure preservation of traditional operator inference while leveraging the dynamic training via roll outs over multiple time steps to increase stability and robustness for learning from low-quality and noisy data. Numerical experiments with data describing shallow water waves and surface quasi-geostrophic dynamics demonstrate that operator inference with roll outs provides predictive models from training trajectories even if data are sampled sparsely in time and polluted with noise of up to 10%.
BibTeX:
@article{UHP22OpInfRollOuts,
title = {Operator inference with roll outs for learning reduced models from scarce and low-quality data},
author = {Uy, W.I.T. and Hartmann, D. and Peherstorfer, B.},
journal = {Computers \& Mathematics with Applications},
volume = {145},
year = {2023},
}
[17]
Uy, W.I.T., Wang, Y., Wen, Y. & Peherstorfer, B. Active operator inference for learning low-dimensional dynamical-system models from noisy data. SIAM Journal on Scientific Computing, 2023 (accepted).
Abstract: Noise poses a challenge for learning dynamical-system models because already small variations can distort the dynamics described by trajectory data. This work builds on operator inference from scientific machine learning to infer low-dimensional models from high-dimensional state trajectories polluted with noise. The presented analysis shows that, under certain conditions, the inferred operators are unbiased estimators of the well-studied projection-based reduced operators from traditional model reduction. Furthermore, the connection between operator inference and projection-based model reduction enables bounding the mean-squared errors of predictions made with the learned models with respect to traditional reduced models. The analysis also motivates an active operator inference approach that judiciously samples high-dimensional trajectories with the aim of achieving a low mean-squared error by reducing the effect of noise. Numerical experiments with high-dimensional linear and nonlinear state dynamics demonstrate that predictions obtained with active operator inference have orders of magnitude lower mean-squared errors than operator inference with traditional, equidistantly sampled trajectory data.
BibTeX:
@article{UWWP21OpInfNoise,
title = {Active operator inference for learning low-dimensional dynamical-system models from noisy data},
author = {Uy, W.I.T. and Wang, Y. and Wen, Y. and Peherstorfer, B.},
journal = {SIAM Journal on Scientific Computing},
year = {2023},
}
[18]
Werner, S.W.R. & Peherstorfer, B. Context-aware controller inference for stabilizing dynamical systems from scarce data. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2023 (accepted).
Abstract: This work introduces a data-driven control approach for stabilizing high-dimensional dynamical systems from scarce data. The proposed context-aware controller inference approach is based on the observation that controllers need to act locally only on the unstable dynamics to stabilize systems. This means it is sufficient to learn the unstable dynamics alone, which are typically confined to much lower dimensional spaces than the high-dimensional state spaces of all system dynamics and thus few data samples are sufficient to identify them. Numerical experiments demonstrate that context-aware controller inference learns stabilizing controllers from orders of magnitude fewer data samples than traditional data-driven control techniques and variants of reinforcement learning. The experiments further show that the low data requirements of context-aware controller inference are especially beneficial in data-scarce engineering problems with complex physics, for which learning complete system dynamics is often intractable in terms of data and training costs.
BibTeX:
@article{WP22ContextControllerInf,
title = {Context-aware controller inference for stabilizing dynamical systems from scarce data},
author = {Werner, S.W.R. and Peherstorfer, B.},
journal = {Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences},
year = {2023},
}
[19]
Farcas, I.G., Peherstorfer, B., Neckel, T., Jenko, F. & Bungartz, H.J. Context-aware learning of hierarchies of low-fidelity models for multi-fidelity uncertainty quantification. Computer Methods in Applied Mechanics and Engineering, 2023 (accepted).
Abstract: Multi-fidelity Monte Carlo methods leverage low-fidelity and surrogate models for variance reduction to make tractable uncertainty quantification even when numerically simulating the physical systems of interest with high-fidelity models is computationally expensive. This work proposes a context-aware multi-fidelity Monte Carlo method that optimally balances the costs of training low-fidelity models with the costs of Monte Carlo sampling. It generalizes the previously developed context-aware bi-fidelity Monte Carlo method to hierarchies of multiple models and to more general types of low-fidelity models. When training low-fidelity models, the proposed approach takes into account the context in which the learned low-fidelity models will be used, namely for variance reduction in Monte Carlo estimation, which allows it to find optimal trade-offs between training and sampling to minimize upper bounds of the mean-squared errors of the estimators for given computational budgets. This is in stark contrast to traditional surrogate modeling and model reduction techniques that construct low-fidelity models with the primary goal of approximating well the high-fidelity model outputs and typically ignore the context in which the learned models will be used in upstream tasks. The proposed context-aware multi-fidelity Monte Carlo method applies to hierarchies of a wide range of types of low-fidelity models such as sparse-grid and deep-network models. Numerical experiments with the gyrokinetic simulation code GENE show speedups of up to two orders of magnitude compared to standard estimators when quantifying uncertainties in small-scale fluctuations in confined plasma in fusion reactors. This corresponds to a runtime reduction from 72 days to about four hours on one node of the Lonestar6 supercomputer at the Texas Advanced Computing Center.
BibTeX:
@article{FPNJB22CAMFMC,
title = {Context-aware learning of hierarchies of low-fidelity models for multi-fidelity uncertainty quantification},
author = {Farcas, I.G. and Peherstorfer, B. and Neckel, T. and Jenko, F. and Bungartz, H.J.},
journal = {Computer Methods in Applied Mechanics and Engineering},
year = {2023},
}
[20]
Rim, D., Peherstorfer, B. & Mandli, K.T. Manifold approximations via transported subspaces: Model reduction for transport-dominated problems. SIAM Journal on Scientific Computing, 45, 2023.
Abstract: This work presents a method for constructing online-efficient reduced models of large-scale systems governed by parametrized nonlinear scalar conservation laws. The solution manifolds induced by transport-dominated problems such as hyperbolic conservation laws typically exhibit nonlinear structures, which means that traditional model reduction methods based on linear approximations are inefficient when applied to these problems. In contrast, the approach introduced in this work derives reduced approximations that are nonlinear by explicitly composing global transport dynamics with locally linear approximations of the solution manifolds. A time-stepping scheme evolves the nonlinear reduced models by transporting local approximation spaces along the characteristic curves of the governing equations. The proposed computational procedure allows an offline/online decomposition and is online efficient in the sense that the costs of time-stepping the nonlinear reduced models are independent of the number of degrees of freedom of the full model. Numerical experiments with transport through heterogeneous media and the Burgers' equation show orders of magnitude speedups of the proposed nonlinear reduced models based on transported subspaces compared to traditional linear reduced models and full models.
BibTeX:
@article{RPM19MATS,
title = {Manifold approximations via transported subspaces: Model reduction for transport-dominated problems},
author = {Rim, D. and Peherstorfer, B. and Mandli, K.T.},
journal = {SIAM Journal on Scientific Computing},
volume = {45},
year = {2023},
}
[21]
Werner, S.W.R. & Peherstorfer, B. On the sample complexity of stabilizing linear dynamical systems from data. Foundations of Computational Mathematics, 2022 (accepted).
Abstract: Learning controllers from data for stabilizing dynamical systems typically follows a two-step process of first identifying a model and then constructing a controller based on the identified model. However, learning models means identifying generic descriptions of the dynamics of systems, which can require large amounts of data and extracting information that is unnecessary for the specific task of stabilization. The contribution of this work is to show that if a linear dynamical system has dimension (McMillan degree) n, then there always exist n states from which a stabilizing feedback controller can be constructed, independent of the dimension of the representation of the observed states and the number of inputs. By building on previous work, this finding implies that any linear dynamical system can be stabilized from fewer observed states than the minimal number of states required for learning a model of the dynamics. The theoretical findings are demonstrated with numerical experiments that show the stabilization of the flow behind a cylinder from less data than necessary for learning a model.
A schematic sketch of the underlying observation follows this entry.
BibTeX:
@article{WP22CSample,
title = {On the sample complexity of stabilizing linear dynamical systems from data},
author = {Werner, S.W.R. and Peherstorfer, B.},
journal = {Foundations of Computational Mathematics},
year = {2022},
}
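The abstract's message, that only the low-dimensional unstable dynamics must be handled to stabilize a system, can be illustrated on a toy linear system; the block structure and the LQR design below are illustrative choices of mine, not the paper's construction:

```python
# Hedged sketch: a feedback law designed on the 2-dimensional unstable block alone
# stabilizes the full 10-dimensional system, since the stable block needs no control.
import numpy as np
from scipy.linalg import solve_continuous_are

A_u = np.array([[0.5, 1.0], [0.0, 0.3]])   # unstable block (eigenvalues 0.5, 0.3)
A_s = -np.eye(8)                           # stable block, left untouched
A = np.block([[A_u, np.zeros((2, 8))], [np.zeros((8, 2)), A_s]])
B = np.vstack([np.eye(2), np.zeros((8, 2))])

# LQR feedback computed from the unstable dynamics only
P = solve_continuous_are(A_u, B[:2], np.eye(2), np.eye(2))
K_u = B[:2].T @ P
K = np.hstack([K_u, np.zeros((2, 8))])     # acts only on the unstable coordinates

assert np.all(np.linalg.eigvals(A - B @ K).real < 0)   # closed loop is stable
```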
[22]
Sawant, N., Kramer, B. & Peherstorfer, B. Physics-informed regularization and structure preservation for learning stable reduced models from data with operator inference. Computer Methods in Applied Mechanics and Engineering, 2022 (accepted).
Abstract: Operator inference learns low-dimensional dynamical-system models with polynomial nonlinear terms from trajectories of high-dimensional physical systems (non-intrusive model reduction). This work focuses on the large class of physical systems that can be well described by models with quadratic and cubic nonlinear terms and proposes a regularizer for operator inference that induces a stability bias onto learned models. The proposed regularizer is physics informed in the sense that it penalizes higher-order terms with large norms and so explicitly leverages the polynomial model form that is given by the underlying physics. This means that the proposed approach judiciously learns from data and physical insights combined, rather than from either data or physics alone. Additionally, a formulation of operator inference is proposed that enforces model constraints for preserving structure such as symmetry and definiteness in linear terms. Numerical results demonstrate that models learned with operator inference and the proposed regularizer and structure preservation are accurate and stable even in cases where using no regularization and Tikhonov regularization leads to models that are unstable.
BibTeX:
@article{SKP21OpInfReg,
title = {Physics-informed regularization and structure preservation for learning stable reduced models from data with operator inference},
author = {Sawant, N. and Kramer, B. and Peherstorfer, B.},
journal = {Computer Methods in Applied Mechanics and Engineering},
year = {2022},
}
[23]
Werner, S.W.R., Overton, M.L. & Peherstorfer, B. Multi-fidelity robust controller design with gradient sampling. SIAM Journal on Scientific Computing, 2022 (accepted).
Abstract: Robust controllers that stabilize dynamical systems even under disturbances and noise are often formulated as solutions of nonsmooth, nonconvex optimization problems. While methods such as gradient sampling can handle the nonconvexity and nonsmoothness, the costs of evaluating the objective function may be substantial, making robust control challenging for dynamical systems with high-dimensional state spaces. In this work, we introduce multi-fidelity variants of gradient sampling that leverage low-cost, low-fidelity models with low-dimensional state spaces for speeding up the optimization process while nonetheless providing convergence guarantees for a high-fidelity model of the system of interest, which is primarily accessed only in the last phase of the optimization process. Our first multi-fidelity method initiates gradient sampling on higher fidelity models with starting points obtained from cheaper, lower fidelity models. Our second multi-fidelity method relies on ensembles of gradients that are computed from low- and high-fidelity models. Numerical experiments with controlling the cooling of a steel rail profile and laminar flow in a cylinder wake demonstrate that our new multi-fidelity gradient sampling methods achieve up to two orders of magnitude speedup compared to the single-fidelity gradient sampling method that relies on the high-fidelity model alone.
BibTeX:
@article{WOP22MFControl,
title = {Multi-fidelity robust controller design with gradient sampling},
author = {Werner, S.W.R. and Overton, M.L. and Peherstorfer, B.},
journal = {SIAM Journal on Scientific Computing},
year = {2022},
}
[24]
Alsup, T. & Peherstorfer, B. Context-aware surrogate modeling for balancing approximation and sampling costs in multi-fidelity importance sampling and Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification, 2022 (accepted).
Abstract: Multi-fidelity methods leverage low-cost surrogate models to speed up computations and make occasional recourse to expensive high-fidelity models to establish accuracy guarantees. Because surrogate and high-fidelity models are used together, poor predictions by surrogate models can be compensated for with frequent recourse to high-fidelity models. Thus, there is a trade-off between investing computational resources to improve the accuracy of surrogate models versus simply making more frequent recourse to expensive high-fidelity models; however, this trade-off is ignored by traditional modeling methods that construct surrogate models that are meant to replace high-fidelity models rather than being used together with high-fidelity models. This work considers multi-fidelity importance sampling and theoretically and computationally trades off increasing the fidelity of surrogate models for constructing more accurate biasing densities and the numbers of samples that are required from the high-fidelity models to compensate for poor biasing densities. Numerical examples demonstrate that such context-aware surrogate models for multi-fidelity importance sampling have lower fidelity than what typically is set as tolerance in traditional model reduction, leading to runtime speedups of up to one order of magnitude in the presented examples.
A schematic sketch of the importance-sampling setting follows this entry.
BibTeX:
@article{AP20Context,
title = {Context-aware surrogate modeling for balancing approximation and sampling costs in multi-fidelity importance sampling and Bayesian inverse problems},
author = {Alsup, T. and Peherstorfer, B.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
year = {2022},
}
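For context, a minimal importance-sampling sketch of the setting the abstract describes: a cheap surrogate suggests a biasing density, and high-fidelity evaluations are reweighted so that the estimate stays unbiased even when the biasing density is poor. The toy limit state and both densities are assumptions of mine:

```python
# Hedged sketch of surrogate-informed importance sampling for a small failure
# probability P[f_hi(X) > t] with X ~ N(0, 1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
f_hi = lambda x: x          # stand-in high-fidelity limit-state function
threshold = 3.0             # failure event: f_hi(X) > 3

nominal = stats.norm(0.0, 1.0)
biasing = stats.norm(3.0, 1.0)   # centered near the failure region, e.g. via a surrogate

x = biasing.rvs(size=10_000, random_state=rng)
weights = nominal.pdf(x) / biasing.pdf(x)            # likelihood-ratio reweighting
p_fail = np.mean((f_hi(x) > threshold) * weights)    # ~ 1 - Phi(3) ~ 1.35e-3
```

A less accurate (lower-fidelity) surrogate produces a worse biasing density, which the estimator must compensate for with more high-fidelity samples; the paper's contribution is trading off exactly these two costs.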
[25]
Peherstorfer, B. Breaking the Kolmogorov Barrier with Nonlinear Model Reduction. Notices of the American Mathematical Society, 69:725-733, 2022.
BibTeX:
@article{P22AMS,
title = {Breaking the Kolmogorov Barrier with Nonlinear Model Reduction},
author = {Peherstorfer, B.},
journal = {Notices of the American Mathematical Society},
volume = {69},
pages = {725-733},
year = {2022},
}
[26]
Law, F., Cerfon, A. & Peherstorfer, B. Accelerating the estimation of collisionless energetic particle confinement statistics in stellarators using multifidelity Monte Carlo. Nuclear Fusion, 2022 (accepted).
Abstract: In the design of stellarators, energetic particle confinement is a critical point of concern which remains challenging to study from a numerical point of view. Standard Monte Carlo analyses are highly expensive because a large number of particle trajectories need to be integrated over long time scales, and small time steps must be taken to accurately capture the features of the wide variety of trajectories. Even when they are based on guiding center trajectories, as opposed to full-orbit trajectories, these standard Monte Carlo studies are too expensive to be included in most stellarator optimization codes. We present the first multifidelity Monte Carlo scheme for accelerating the estimation of energetic particle confinement in stellarators. Our approach relies on a two-level hierarchy, in which a guiding center model serves as the high-fidelity model, and a data-driven linear interpolant is leveraged as the low-fidelity surrogate model. We apply multifidelity Monte Carlo to the study of energetic particle confinement in a 4-period quasi-helically symmetric stellarator, assessing various metrics of confinement. Stemming from the very high computational efficiency of our surrogate model as well as its sufficient correlation to the high-fidelity model, we obtain speedups of up to 10 with multifidelity Monte Carlo compared to standard Monte Carlo.
A schematic sketch of the sample allocation follows this entry.
BibTeX:
@article{LCP21MFMCParticle,
title = {Accelerating the estimation of collisionless energetic particle confinement statistics in stellarators using multifidelity Monte Carlo},
author = {Law, F. and Cerfon, A. and Peherstorfer, B.},
journal = {Nuclear Fusion},
year = {2022},
}
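A hedged sketch of the budget split behind a two-model multifidelity Monte Carlo estimator of this kind, using the standard cost/correlation allocation rule in the style of Peherstorfer, Willcox & Gunzburger; the numbers are illustrative, not the stellarator study's:

```python
# Hedged sketch: two-model MFMC sample allocation for a fixed budget, given the
# evaluation costs w_hi, w_lo and the correlation rho of the surrogate.
import numpy as np

def mfmc_allocation(budget, w_hi, w_lo, rho):
    # ratio of low- to high-fidelity evaluations that minimizes estimator variance
    r = np.sqrt(w_hi / w_lo * rho**2 / (1.0 - rho**2))
    n_hi = budget / (w_hi + r * w_lo)        # budget = n_hi * w_hi + n_lo * w_lo
    return int(n_hi), int(r * n_hi)

# e.g., a surrogate that is 1000x cheaper and 0.95-correlated with the full model
n_hi, n_lo = mfmc_allocation(budget=1e4, w_hi=100.0, w_lo=0.1, rho=0.95)
```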
[27]
Konrad, J., Farcas, I.G., Peherstorfer, B., Siena, A.D., Jenko, F., Neckel, T. & Bungartz, H.J. Data-driven low-fidelity models for multi-fidelity Monte Carlo sampling in plasma micro-turbulence analysis. Journal of Computational Physics, 2021 (accepted).
Abstract: The linear micro-instabilities driving turbulent transport in magnetized fusion plasmas (as well as the respective nonlinear saturation mechanisms) are known to be sensitive with respect to various physical parameters characterizing the background plasma and the magnetic equilibrium. Therefore, uncertainty quantification is essential for achieving predictive numerical simulations of plasma turbulence. However, the high computational costs of the required gyrokinetic simulations and the large number of parameters render standard Monte Carlo techniques intractable. To address this problem, we propose a multi-fidelity Monte Carlo approach in which we employ data-driven low-fidelity models that exploit the structure of the underlying problem such as low intrinsic dimension and anisotropic coupling of the stochastic inputs. The low-fidelity models are efficiently constructed via sensitivity-driven dimension-adaptive sparse grid interpolation using both the full set of uncertain inputs and subsets comprising only selected, important parameters. We illustrate the power of this method by applying it to two plasma turbulence problems with up to 14 stochastic parameters, demonstrating that it is up to four orders of magnitude more efficient than standard Monte Carlo methods measured in single-core performance, which translates into a runtime reduction from around eight days to one hour on 240 cores on parallel machines.
BibTeX:
@article{Konrad21MFMCPlasma,
title = {Data-driven low-fidelity models for multi-fidelity Monte Carlo sampling in plasma micro-turbulence analysis},
author = {Konrad, J. and Farcas, I.G. and Peherstorfer, B. and Siena, A.D. and Jenko, F. and Neckel, T. and Bungartz, H.J.},
journal = {Journal of Computational Physics},
year = {2021},
}
[28]
Alsup, T., Venturi, L. & Peherstorfer, B. Multilevel Stein variational gradient descent with applications to Bayesian inverse problems. In Mathematical and Scientific Machine Learning (MSML), 2021.
Abstract: This work presents a multilevel variant of Stein variational gradient descent to more efficiently sample from target distributions. The key ingredient is a sequence of distributions with growing fidelity and costs that converges to the target distribution of interest. For example, such a sequence of distributions is given by a hierarchy of ever finer discretization levels of the forward model in Bayesian inverse problems. The proposed multilevel Stein variational gradient descent moves most of the iterations to lower, cheaper levels with the aim of requiring only a few iterations on the higher, more expensive levels when compared to the traditional, single-level Stein variational gradient descent variant that uses the highest-level distribution only. Under certain assumptions, in the mean-field limit, the error of the proposed multilevel Stein method decays by a log factor faster than the error of the single-level counterpart with respect to computational costs. Numerical experiments with Bayesian inverse problems show speedups of more than one order of magnitude of the proposed multilevel Stein method compared to the single-level variant that uses the highest level only.
BibTeX:
@inproceedings{AVP21MLSVGD,
title = {Multilevel Stein variational gradient descent with applications to Bayesian inverse problems},
author = {Alsup, T. and Venturi, L. and Peherstorfer, B.},
year = {2021},
booktitle = {Mathematical and Scientific Machine Learning (MSML) 2021},
}
[29]
Otness, K., Gjoka, A., Bruna, J., Panozzo, D., Peherstorfer, B., Schneider, T. & Zorin, D. An Extensible Benchmark Suite for Learning to Simulate Physical Systems. In NeurIPS 2021 Track Datasets and Benchmarks, 2021 (accepted).
Abstract: Simulating physical systems is a core component of scientific computing, encompassing a wide range of physical domains and applications. Recently, there has been a surge in data-driven methods to complement traditional numerical simulation methods, motivated by the opportunity to reduce computational costs and/or learn new physical models leveraging access to large collections of data. However, the diversity of problem settings and applications has led to a plethora of approaches, each one evaluated on a different setup and with different evaluation metrics. We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols. We propose four representative physical systems, as well as a collection of both widely used classical time integrators and representative data-driven methods (kernel-based, MLP, CNN, Nearest-Neighbors). Our framework allows objective and systematic evaluation of the stability, accuracy, and computational efficiency of data-driven methods. Additionally, it is configurable to permit adjustments for accommodating other learning tasks and for establishing a foundation for future developments in machine learning for scientific computing.
BibTeX:
@inproceedings{OGBPPSZ21BenchmarkSuitePhysicsML,
title = {An Extensible Benchmark Suite for Learning to Simulate Physical Systems},
author = {Otness, K. and Gjoka, A. and Bruna, J. and Panozzo, D. and Peherstorfer, B. and Schneider, T. and Zorin, D.},
year = {2021},
booktitle = {NeurIPS 2021 Track Datasets and Benchmarks},
}
[30]
Uy, W.I.T. & Peherstorfer, B. Operator inference of non-Markovian terms for learning reduced models from partially observed state trajectories. Journal of Scientific Computing, 2021 (accepted).
Abstract: This work introduces a non-intrusive model reduction approach for learning reduced models from partially observed state trajectories of high-dimensional dynamical systems. The proposed approach compensates for the loss of information due to the partially observed states by constructing non-Markovian reduced models that make future-state predictions based on a history of reduced states, in contrast to traditional Markovian reduced models that rely on the current reduced state alone to predict the next state. The core contributions of this work are a data sampling scheme to sample partially observed states from high-dimensional dynamical systems and a formulation of a regression problem to fit the non-Markovian reduced terms to the sampled states. Under certain conditions, the proposed approach recovers from data the very same non-Markovian terms that one obtains with intrusive methods that require the governing equations and discrete operators of the high-dimensional dynamical system. Numerical results demonstrate that the proposed approach leads to non-Markovian reduced models that are predictive far beyond the training regime. Additionally, in the numerical experiments, the proposed approach learns non-Markovian reduced models from trajectories with only 20% observed state components that are about as accurate as traditional Markovian reduced models fitted to trajectories with 99% observed components.
BibTeX:
@article{UP21NonMarkovian,
title = {Operator inference of non-Markovian terms for learning reduced models from partially observed state trajectories},
author = {Uy, W.I.T. and Peherstorfer, B.},
journal = {Journal of Scientific Computing},
year = {2021},
}
[31]
Uy, W.I.T. & Peherstorfer, B. Probabilistic error estimation for non-intrusive reduced models learned from data of systems governed by linear parabolic partial differential equations. ESAIM: Mathematical Modelling and Numerical Analysis (M2AN), 2021 (accepted).
Abstract: This work derives a residual-based a posteriori error estimator for reduced models learned with non-intrusive model reduction from data of high-dimensional systems governed by linear parabolic partial differential equations with control inputs. It is shown that quantities that are necessary for the error estimator can be either obtained exactly as the solutions of least-squares problems in a non-intrusive way from data such as initial conditions, control inputs, and high-dimensional solution trajectories or bounded in a probabilistic sense. The computational procedure follows an offline/online decomposition. In the offline (training) phase, the high-dimensional system is judiciously solved in a black-box fashion to generate data and to set up the error estimator. In the online phase, the estimator is used to bound the error of the reduced-model predictions for new initial conditions and new control inputs without recourse to the high-dimensional system. Numerical results demonstrate the workflow of the proposed approach from data to reduced models to certified predictions.
BibTeX:
@article{UP20OpInfError,
title = {Probabilistic error estimation for non-intrusive reduced models learned from data of systems governed by linear parabolic partial differential equations},
author = {Uy, W.I.T. and Peherstorfer, B.},
journal = {ESAIM: Mathematical Modelling and Numerical Analysis (M2AN)},
year = {2021},
}
[32]
Peherstorfer, B. Sampling low-dimensional Markovian dynamics for pre-asymptotically recovering reduced models from data with operator inference. SIAM Journal on Scientific Computing, 42:A3489-A3515, 2020.
Abstract: This work introduces a method for learning low-dimensional models from data of high-dimensional black-box dynamical systems. The novelty is that the learned models are exactly the reduced models that are traditionally constructed with model reduction techniques that require full knowledge of governing equations and operators of the high-dimensional systems. Thus, the learned models are guaranteed to inherit the well-studied properties of reduced models from traditional model reduction. The key ingredient is a new data sampling scheme to obtain re-projected trajectories of high-dimensional systems that correspond to Markovian dynamics in low-dimensional subspaces. The exact recovery of reduced models from these re-projected trajectories is guaranteed pre-asymptotically under certain conditions for finite amounts of data and for a large class of systems with polynomial nonlinear terms. Numerical results demonstrate that the low-dimensional models learned with the proposed approach match reduced models from traditional model reduction up to numerical errors in practice. The numerical results further indicate that low-dimensional models fitted to re-projected trajectories are predictive even in situations where models fitted to trajectories without re-projection are inaccurate and unstable.
A schematic sketch of the re-projection loop follows this entry.
BibTeX:
@article{P19ReProj,
title = {Sampling low-dimensional Markovian dynamics for pre-asymptotically recovering reduced models from data with operator inference},
author = {Peherstorfer, B.},
journal = {SIAM Journal on Scientific Computing},
volume = {42},
pages = {A3489-A3515},
year = {2020},
}
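A schematic of the re-projection sampling scheme as I read the abstract: the full model is queried one time step at a time, and the state is projected back onto the reduced space between steps, so the recorded reduced trajectory follows low-dimensional Markovian dynamics. The function signatures are illustrative assumptions:

```python
# Hedged sketch of the re-projection loop: project after every single full-model
# step, rather than recording a long full-model trajectory and projecting at the end.
import numpy as np

def reprojected_trajectory(step_full, V, x0_r, n_steps):
    """step_full: one time step of the (black-box) full model; V: basis (n x r);
    x0_r: initial reduced state; returns the re-projected reduced trajectory."""
    traj = [x0_r]
    for _ in range(n_steps):
        x_full = step_full(V @ traj[-1])   # lift, advance the full model one step
        traj.append(V.T @ x_full)          # re-project before the next step
    return np.stack(traj)
```

Operator inference applied to such a trajectory recovers, under the paper's conditions, exactly the intrusive reduced operators.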
[33]
Drmac, Z. & Peherstorfer, B. Learning low-dimensional dynamical-system models from noisy frequency-response data with Loewner rational interpolation. In Realization and Model Reduction of Dynamical Systems: A Festschrift in Honor of the 70th Birthday of Thanos Antoulas, Springer, 2020.
Abstract: Loewner rational interpolation provides a versatile tool to learn low-dimensional dynamical-system models from frequency-response measurements. This work investigates the robustness of the Loewner approach to noise. The key finding is that if the measurements are polluted with Gaussian noise, then the error due to noise grows at most linearly with the standard deviation with high probability under certain conditions. The analysis gives insights into making the Loewner approach robust against noise via linear transformations and judicious selections of measurements. Numerical results demonstrate the linear growth of the error on benchmark examples.
A schematic sketch of the Loewner matrix follows this entry.
BibTeX:
@inproceedings{DP19LoewnerNoise,
title = {Learning low-dimensional dynamical-system models from noisy frequency-response data with Loewner rational interpolation},
author = {Drmac, Z. and Peherstorfer, B.},
year = {2020},
booktitle = {Realization and Model Reduction of Dynamical Systems: A Festschrift in Honor of the 70th Birthday of Thanos Antoulas},
publisher = {Springer},
}
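For readers new to the Loewner framework, here is a minimal sketch of the data-driven object at its core; the toy transfer function and the sampling grid are my choices, not the chapter's benchmarks:

```python
# Hedged sketch: assemble the Loewner matrix from frequency-response samples
# (s_i, H(s_i)) partitioned into left and right sets.
import numpy as np

H = lambda s: 1.0 / (s + 1.0) + 2.0 / (s + 3.0)   # toy order-2 transfer function

s = 1j * np.linspace(0.1, 10.0, 20)               # samples on the imaginary axis
mu, lam = s[0::2], s[1::2]                        # left / right partition
v, w = H(mu), H(lam)

# Loewner matrix: L[i, j] = (v_i - w_j) / (mu_i - lam_j)
L = (v[:, None] - w[None, :]) / (mu[:, None] - lam[None, :])

# for noise-free data, the numerical rank of L reveals the order of the
# underlying rational model (2 for this example)
print(np.linalg.matrix_rank(L, tol=1e-8))
```

Noise on v and w perturbs L entrywise, which is the starting point of the chapter's robustness analysis.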
[34]
Benner, P., Goyal, P., Kramer, B., Peherstorfer, B. & Willcox, K. Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms. Computer Methods in Applied Mechanics and Engineering, 372, 2020.
Abstract: This work presents a non-intrusive model reduction method to learn low-dimensional models of dynamical systems with non-polynomial nonlinear terms that are spatially local and that are given in analytic form. In contrast to state-of-the-art model reduction methods that are intrusive and thus require full knowledge of the governing equations and the operators of a full model of the discretized dynamical system, the proposed approach requires only the non-polynomial terms in analytic form and learns the rest of the dynamics from snapshots computed with a potentially black-box full-model solver. The proposed method learns operators for the linear and polynomially nonlinear dynamics via a least-squares problem, where the given non-polynomial terms are incorporated in the right-hand side. The least-squares problem is linear and thus can be solved efficiently in practice. The proposed method is demonstrated on three problems governed by partial differential equations, namely the diffusion-reaction Chafee-Infante model, a tubular reactor model for reactive flows, and a batch-chromatography model that describes a chemical separation process. The numerical results provide evidence that the proposed approach learns reduced models that achieve comparable accuracy as models constructed with state-of-the-art intrusive model reduction methods that require full knowledge of the governing equations.
BibTeX:
@article{BGKPW20OpInfNonPoly,
title = {Operator inference for non-intrusive model reduction of systems with non-polynomial nonlinear terms},
author = {Benner, P. and Goyal, P. and Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {372},
year = {2020},
} |
[35] |
Peherstorfer, B., Drmac, Z. & Gugercin, S. Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points. SIAM Journal on Scientific Computing, 42:A2837-A2864, 2020. [Abstract]Abstract This work investigates the stability of (discrete) empirical interpolation for nonlinear model reduction and state field approximation from measurements. Empirical interpolation derives approximations from a few samples (measurements) via interpolation in low-dimensional spaces. It has been observed that empirical interpolation can become unstable if the samples are perturbed due to, e.g., noise, turbulence, and numerical inaccuracies. The main contribution of this work is a probabilistic analysis that shows that stable approximations are obtained if samples are randomized and if more samples than dimensions of the low-dimensional spaces are used. Oversampling, i.e., taking more sampling points than dimensions of the low-dimensional spaces, leads to approximations via regression and is known under the name of gappy proper orthogonal decomposition. Building on the insights of the probabilistic analysis, a deterministic sampling strategy is presented that aims to achieve lower approximation errors with fewer points than randomized sampling by taking information about the low-dimensional spaces into account. Numerical results of reconstructing velocity fields from noisy measurements of combustion processes and model reduction in the presence of noise demonstrate the instability of empirical interpolation and the stability of gappy proper orthogonal decomposition with oversampling. [BibTeX]@article{PDG18ODEIM,
title = {Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points},
author = {Peherstorfer, B. and Drmac, Z. and Gugercin, S.},
journal = {SIAM Journal on Scientific Computing},
volume = {42},
pages = {A2837-A2864},
year = {2020},
} |
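The difference between empirical interpolation and gappy POD with oversampling amounts to solving a square versus an overdetermined least-squares system in the sampled rows of the basis. A small NumPy sketch (hypothetical names; randomized oversampling as analyzed in the paper):

```python
import numpy as np

def gappy_pod_reconstruct(U, idx, samples):
    """Approximate a state from point samples in the basis U.

    U       : (n, r) basis with orthonormal columns
    idx     : indices of m sampling points; m == r gives empirical
              interpolation, m > r gives gappy POD (regression)
    samples : (m,) measured values at the points, possibly noisy
    """
    coef, *_ = np.linalg.lstsq(U[idx, :], samples, rcond=None)
    return U @ coef

# Randomized oversampling, the regime covered by the probabilistic
# analysis: draw more sampling points than basis dimensions.
rng = np.random.default_rng(0)
n, r = 1000, 10
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
idx = rng.choice(n, size=2 * r, replace=False)
x = U @ rng.standard_normal(r)
x_rec = gappy_pod_reconstruct(U, idx, x[idx] + 1e-3 * rng.standard_normal(2 * r))
```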
[36] |
Peherstorfer, B. Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling. SIAM Journal on Scientific Computing, 42:A2803-A2836, 2020. [Abstract]Abstract This work presents a model reduction approach for problems with coherent structures that propagate over time such as convection-dominated flows and wave-type phenomena. Traditional model reduction methods have difficulties with these transport-dominated problems because propagating coherent structures typically introduce high-dimensional features that require high-dimensional approximation spaces. The approach proposed in this work exploits the locality in space and time of propagating coherent structures to derive efficient reduced models. First, full-model solutions are approximated locally in time via local reduced spaces that are adapted with basis updates during time stepping. The basis updates are derived from querying the full model at a few selected spatial coordinates. Second, the locality in space of the coherent structures is exploited via an adaptive sampling scheme that selects at which components to query the full model for computing the basis updates. Our analysis shows that, in probability, the more local the coherent structure is in space, the fewer full-model samples are required to adapt the reduced basis with the proposed adaptive sampling scheme. Numerical results on benchmark examples with interacting wave-type structures and time-varying transport speeds and on a model combustor of a single-element rocket engine demonstrate the wide applicability of our approach and the significant runtime speedups compared to full models and traditional reduced models. [BibTeX]@article{P18AADEIM,
title = {Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling},
author = {Peherstorfer, B.},
journal = {SIAM Journal on Scientific Computing},
volume = {42},
pages = {A2803-A2836},
year = {2020},
} |
[37] |
Qian, E., Kramer, B., Peherstorfer, B. & Willcox, K. Lift & Learn: Physics-informed machine learning for large-scale nonlinear dynamical systems. Physica D: Nonlinear Phenomena, 406, 2020. [Abstract]Abstract We present Lift & Learn, a physics-informed method for learning low-dimensional models for large-scale dynamical systems. The method exploits knowledge of a system's governing equations to identify a coordinate transformation in which the system dynamics have quadratic structure. This transformation is called a lifting map because it often adds auxiliary variables to the system state. The lifting map is applied to data obtained by evaluating a model for the original nonlinear system. This lifted data is projected onto its leading principal components, and low-dimensional linear and quadratic matrix operators are fit to the lifted reduced data using a least-squares operator inference procedure. Analysis of our method shows that the Lift & Learn models are able to capture the system physics in the lifted coordinates at least as accurately as traditional intrusive model reduction approaches. This preservation of system physics makes the Lift & Learn models robust to changes in inputs. Numerical experiments on the FitzHugh-Nagumo neuron activation model and the compressible Euler equations demonstrate the generalizability of our model. [BibTeX]@article{QKPW19LiftLearn,
title = {Lift & Learn: Physics-informed machine learning for large-scale nonlinear dynamical systems},
author = {Qian, E. and Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {Physica D: Nonlinear Phenomena},
volume = {406},
year = {2020},
} |
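After lifting, the learning step is a linear least-squares problem for the reduced linear and quadratic operators. A NumPy sketch under assumed inputs (the lifted snapshots `W` and their time derivatives `Wdot` are taken as given, since the lifting map itself is problem specific):

```python
import numpy as np

def lift_and_learn(W, Wdot, r):
    """Infer reduced linear and quadratic operators from lifted data.

    W, Wdot : (n_lifted, K) lifted snapshots and their time derivatives
    r       : reduced dimension
    Returns the basis V and operators A (r, r), H (r, r*r) such that
    d/dt w_r ≈ A w_r + H (w_r ⊗ w_r).
    """
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    V = U[:, :r]                        # leading principal components
    Wr, Wrdot = V.T @ W, V.T @ Wdot     # project the lifted data
    # Columns of Q are the Kronecker squares w_r ⊗ w_r of the reduced states
    Q = np.einsum('ik,jk->ijk', Wr, Wr).reshape(r * r, -1)
    D = np.vstack([Wr, Q]).T            # least-squares data matrix
    O, *_ = np.linalg.lstsq(D, Wrdot.T, rcond=None)
    A, H = O[:r].T, O[r:].T
    return V, A, H
```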
[38] |
Peherstorfer, B. & Marzouk, Y. A transport-based multifidelity preconditioner for Markov chain Monte Carlo. Advances in Computational Mathematics, 45:2321-2348, 2019. [Abstract]Abstract Markov chain Monte Carlo (MCMC) sampling of posterior distributions arising in Bayesian inverse problems is challenging when evaluations of the forward model are computationally expensive. Replacing the forward model with a low-cost, low-fidelity model often significantly reduces computational cost; however, employing a low-fidelity model alone means that the stationary distribution of the MCMC chain is the posterior distribution corresponding to the low-fidelity model, rather than the original posterior distribution corresponding to the high-fidelity model. We propose a multifidelity approach that combines, rather than replaces, the high-fidelity model with a low-fidelity model. First, the low-fidelity model is used to construct a transport map that deterministically couples a reference Gaussian distribution with an approximation of the low-fidelity posterior. Then, the high-fidelity posterior distribution is explored using a non-Gaussian proposal distribution derived from the transport map. This multifidelity preconditioned MCMC approach seeks efficient sampling via a proposal that is explicitly tailored to the posterior at hand and that is constructed efficiently with the low-fidelity model. By relying on the low-fidelity model only to construct the proposal distribution, our approach guarantees that the stationary distribution of the MCMC chain is the high-fidelity posterior. In our numerical examples, our multifidelity approach achieves significant speedups compared to single-fidelity MCMC sampling methods. [BibTeX]@article{PM18MultiTM,
title = {A transport-based multifidelity preconditioner for Markov chain Monte Carlo},
author = {Peherstorfer, B. and Marzouk, Y.},
journal = {Advances in Computational Mathematics},
volume = {45},
pages = {2321-2348},
year = {2019},
} |
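Structurally, the sampler is an ordinary Metropolis-Hastings chain whose independence proposal is the pushforward of a reference Gaussian under a transport map built from the low-fidelity model. The sketch below uses a linear map, so the proposal reduces to a Gaussian, purely for illustration; the paper constructs general non-Gaussian maps:

```python
import numpy as np

def mf_preconditioned_mcmc(log_post_hi, lofi_samples, n_steps, rng):
    """Independence MH sampler with a proposal built from low-fidelity samples.

    A linear transport map T(z) = mu + L z fitted to low-fidelity posterior
    samples stands in for the general non-Gaussian maps of the paper.
    log_post_hi : unnormalized high-fidelity log-posterior (callable)
    """
    mu = lofi_samples.mean(axis=0)
    C = np.cov(lofi_samples, rowvar=False)
    L = np.linalg.cholesky(C)
    Cinv = np.linalg.inv(C)

    def log_q(x):                 # proposal log-density (up to a constant)
        d = x - mu
        return -0.5 * d @ Cinv @ d

    x, chain = mu, []
    for _ in range(n_steps):
        xp = mu + L @ rng.standard_normal(mu.size)   # propose via the map
        log_alpha = (log_post_hi(xp) - log_post_hi(x)) + (log_q(x) - log_q(xp))
        if np.log(rng.uniform()) < log_alpha:
            x = xp                # accept; stationary law is the hi-fi posterior
        chain.append(x)
    return np.array(chain)
```

Because the low-fidelity model enters only through the proposal, the acceptance step with the high-fidelity posterior preserves the correct stationary distribution, which is the point of the method.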
[39] |
Peherstorfer, B. Multifidelity Monte Carlo estimation with adaptive low-fidelity models. SIAM/ASA Journal on Uncertainty Quantification, 7:579-603, 2019. [Abstract]Abstract Multifidelity Monte Carlo (MFMC) estimation combines low- and high-fidelity models to speed up the estimation of statistics of the high-fidelity model outputs. MFMC optimally samples the low- and high-fidelity models such that the MFMC estimator has minimal mean-squared error for a given computational budget. In the setup of MFMC, the low-fidelity models are static, i.e., they are given and fixed and cannot be changed and adapted. We introduce the adaptive MFMC (AMFMC) method that splits the computational budget between adapting the low-fidelity models to improve their approximation quality and sampling the low- and high-fidelity models to reduce the mean-squared error of the estimator. Our AMFMC approach derives the quasi-optimal balance between adaptation and sampling in the sense that our approach minimizes an upper bound of the mean-squared error, instead of the error directly. We show that the quasi-optimal number of adaptations of the low-fidelity models is bounded even in the limit case that an infinite budget is available. This shows that adapting low-fidelity models in MFMC beyond a certain approximation accuracy is unnecessary and can even be wasteful. Our AMFMC approach trades off adaptation and sampling and so avoids over-adaptation of the low-fidelity models. Besides the costs of adapting low-fidelity models, our AMFMC approach can also take into account the costs of the initial construction of the low-fidelity models ("offline costs"), which is critical if low-fidelity models are computationally expensive to build such as reduced models and data-fit surrogate models. Numerical results demonstrate that our adaptive approach can achieve orders of magnitude speedups compared to MFMC estimators with static low-fidelity models and compared to Monte Carlo estimators that use the high-fidelity model alone. [BibTeX]@article{P19AMFMC,
title = {Multifidelity Monte Carlo estimation with adaptive low-fidelity models},
author = {Peherstorfer, B.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {7},
pages = {579-603},
year = {2019},
} |
[40] |
Kramer, B., Marques, A., Peherstorfer, B., Villa, U. & Willcox, K. Multifidelity probability estimation via fusion of estimators. Journal of Computational Physics, 392:385-402, 2019. [Abstract]Abstract This paper develops a multifidelity method that enables estimation of failure probabilities for expensive-to-evaluate models via information fusion and importance sampling. The presented general fusion method combines multiple probability estimators with the goal of variance reduction. We use low-fidelity models to derive biasing densities for importance sampling and then fuse the importance sampling estimators such that the fused multifidelity estimator is unbiased and has mean-squared error lower than or equal to that of any of the importance sampling estimators alone. By fusing all available estimators, the method circumvents the challenging problem of selecting the best biasing density and using only that density for sampling. A rigorous analysis shows that the fused estimator is optimal in the sense that it has minimal variance amongst all possible combinations of the estimators. The asymptotic behavior of the proposed method is demonstrated on a convection-diffusion-reaction partial differential equation model for which 1e+5 samples can be afforded. To illustrate the proposed method at scale, we consider a model of a free plane jet and quantify how uncertainties at the flow inlet propagate to a quantity of interest related to turbulent mixing. Compared to an importance sampling estimator that uses the high-fidelity model alone, our multifidelity estimator reduces the required CPU time by 65% while achieving a similar coefficient of variation. [BibTeX]@article{KMPVW17Fusion,
title = {Multifidelity probability estimation via fusion of estimators},
author = {Kramer, B. and Marques, A. and Peherstorfer, B. and Villa, U. and Willcox, K.},
volume = {392},
pages = {385-402},
year = {2019},
journal = {Journal of Computational Physics},
} |
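For the special case of independent unbiased estimators, the variance-minimal unbiased combination weights each estimator by its inverse variance; the paper derives the optimal weights for the general, possibly correlated importance-sampling case. A small sketch of the independent case (assumes the variances are known or estimated):

```python
import numpy as np

def fuse_estimators(estimates, variances):
    """Minimal-variance unbiased fusion of independent unbiased estimators.

    Weights proportional to inverse variances sum to one, so the fused
    estimator stays unbiased and its variance is no larger than that of
    any single estimator.
    """
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused, fused_var
```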
[41] |
Swischuk, R., Mainini, L., Peherstorfer, B. & Willcox, K. Projection-based model reduction: Formulations for physics-based machine learning. Computers & Fluids, 179:704-717, 2019. [Abstract]Abstract This paper considers the creation of parametric surrogate models for applications in science and engineering where the goal is to predict high-dimensional output quantities of interest, such as pressure, temperature and strain fields. The proposed methodology develops a low-dimensional parametrization of these quantities of interest using the proper orthogonal decomposition (POD), and combines this parametrization with machine learning methods to learn the map between the input parameters and the POD expansion coefficients. The use of particular solutions in the POD expansion provides a way to embed physical constraints, such as boundary conditions and other features of the solution that must be preserved. The relative costs and effectiveness of four different machine learning techniques—neural networks, multivariate polynomial regression, k-nearest-neighbors and decision trees—are explored through two engineering examples. The first example considers prediction of the pressure field around an airfoil, while the second considers prediction of the strain field over a damaged composite panel. The case studies demonstrate the importance of embedding physical constraints within learned models, and also highlight the important point that the amount of model training data available in an engineering setting is often much less than it is in other machine learning applications, making it essential to incorporate knowledge from physical models. [BibTeX]@article{SMPK18PhysicsLearning,
title = {Projection-based model reduction: Formulations for physics-based machine learning},
author = {Swischuk, R. and Mainini, L. and Peherstorfer, B. and Willcox, K.},
journal = {Computers & Fluids},
volume = {179},
pages = {704-717},
year = {2019},
} |
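The overall pipeline is: compress the training fields with POD, then regress the POD expansion coefficients on the input parameters. The sketch below substitutes plain affine least-squares regression for the four learning methods compared in the paper (all names hypothetical):

```python
import numpy as np

def pod_regression_surrogate(P_train, S_train, r):
    """Learn a map from parameters to POD coefficients of output fields.

    P_train : (m, p) training parameters
    S_train : (n, m) corresponding high-dimensional output fields
    r       : number of POD modes
    A plain affine regression stands in for the neural networks, polynomial
    regression, k-nearest-neighbors, and decision trees of the paper.
    """
    U, _, _ = np.linalg.svd(S_train, full_matrices=False)
    V = U[:, :r]                          # POD basis of the outputs
    C_train = V.T @ S_train               # (r, m) POD coefficients
    Phi = np.hstack([P_train, np.ones((P_train.shape[0], 1))])
    W, *_ = np.linalg.lstsq(Phi, C_train.T, rcond=None)

    def predict(p):
        phi = np.append(p, 1.0)
        return V @ (W.T @ phi)            # reconstruct the full field
    return predict
```

Physical constraints such as boundary conditions enter through particular solutions in the POD expansion, which this bare sketch omits.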
[42] |
Peherstorfer, B., Kramer, B. & Willcox, K. Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 6(2):737-761, 2018. [Abstract]Abstract Accurately estimating rare event probabilities with Monte Carlo can become costly if for each sample a computationally expensive high-fidelity model evaluation is necessary to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses a single model alone. [BibTeX]@article{PKW17MFCE,
title = {Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation},
author = {Peherstorfer, B. and Kramer, B. and Willcox, K.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {6},
number = {2},
pages = {737-761},
year = {2018},
} |
[43] |
Peherstorfer, B., Gunzburger, M. & Willcox, K. Convergence analysis of multifidelity Monte Carlo estimation. Numerische Mathematik, 139(3):683-707, 2018. [Abstract]Abstract The multifidelity Monte Carlo method provides a general framework for combining cheap low-fidelity approximations of an expensive high-fidelity model to accelerate the Monte Carlo estimation of statistics of the high-fidelity model output. In this work, we investigate the properties of multifidelity Monte Carlo estimation in the setting where a hierarchy of approximations can be constructed with known error and cost bounds. Our main result is a convergence analysis of multifidelity Monte Carlo estimation, for which we prove a bound on the costs of the multifidelity Monte Carlo estimator under assumptions on the error and cost bounds of the low-fidelity approximations. The assumptions that we make are typical in the setting of similar Monte Carlo techniques. Numerical experiments illustrate the derived bounds. [BibTeX]@article{PWK16MFMCAsymptotics,
title = {Convergence analysis of multifidelity Monte Carlo estimation},
author = {Peherstorfer, B. and Gunzburger, M. and Willcox, K.},
journal = {Numerische Mathematik},
volume = {139},
number = {3},
pages = {683-707},
year = {2018},
} |
[44] |
Qian, E., Peherstorfer, B., O'Malley, D., Vesselinov, V.V. & Willcox, K. Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices. SIAM/ASA Journal on Uncertainty Quantification, 6(2):683-706, 2018. [Abstract]Abstract Variance-based sensitivity analysis provides a quantitative measure of how uncertainty in a model input contributes to uncertainty in the model output. Such sensitivity analyses arise in a wide variety of applications and are typically computed using Monte Carlo estimation, but the many samples required for Monte Carlo to be sufficiently accurate can make these analyses intractable when the model is expensive. This work presents a multifidelity approach for estimating sensitivity indices that leverages cheaper low-fidelity models to reduce the cost of sensitivity analysis while retaining accuracy guarantees via recourse to the original, expensive model. This paper develops new multifidelity estimators for variance and for the Sobol' main and total effect sensitivity indices. We discuss strategies for dividing limited computational resources among models and specify a recommended strategy. Results are presented for the Ishigami function and a convection-diffusion-reaction model that demonstrate up to 10x speedups for fixed convergence levels. For the problems tested, the multifidelity approach allows inputs to be definitively ranked in importance when Monte Carlo alone fails to do so. [BibTeX]@article{QPOVW17MFGSA,
title = {Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices},
author = {Qian, E. and Peherstorfer, B. and O'Malley, D. and Vesselinov, V.V. and Willcox, K.},
journal = {SIAM/ASA Journal on Uncertainty Quantification},
volume = {6},
number = {2},
pages = {683-706},
year = {2018},
} |
[45] |
Baptista, R., Marzouk, Y., Willcox, K. & Peherstorfer, B. Optimal Approximations of Coupling in Multidisciplinary Models. AIAA Journal, 56:2412-2428, 2018. [Abstract]Abstract This paper presents a methodology for identifying important discipline couplings in multicomponent engineering systems. Coupling among disciplines contributes significantly to the computational cost of analyzing a system, and can become particularly burdensome when coupled analyses are embedded within a design or optimization loop. In many cases, disciplines may be weakly coupled, so that some of the coupling or interaction terms can be neglected without significantly impacting the accuracy of the system output. Typical practice derives such approximations in an ad hoc manner using expert opinion and domain experience. This work proposes a new approach that formulates an optimization problem to find a model that optimally balances accuracy of the model outputs with the sparsity of the discipline couplings. An adaptive sequential Monte Carlo sampling-based technique is used to efficiently search the combinatorial model space of different discipline couplings. An algorithm for selecting an optimal model is presented and illustrated in a fire detection satellite model and a turbine engine cycle analysis model. [BibTeX]@article{AIAADecouple18Baptista,
title = {Optimal Approximations of Coupling in Multidisciplinary Models},
author = {Baptista, R. and Marzouk, Y. and Willcox, K. and Peherstorfer, B.},
journal = {AIAA Journal},
volume = {56},
pages = {2412-2428},
year = {2018},
} |
[46] |
Zimmermann, R., Peherstorfer, B. & Willcox, K. Geometric subspace updates with applications to online adaptive nonlinear model reduction. SIAM Journal on Matrix Analysis and Applications, 39(1):234-261, 2018. [Abstract]Abstract In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from the least-squares problem that underlies the approximation problem of interest such that the associated least-squares residual vanishes exactly. The method builds on a Riemannian optimization procedure on the Grassmann manifold of low-dimensional subspaces, namely the Grassmannian Rank-One Subspace Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems. [BibTeX]@article{ZPW17SIMAXManifold,
title = {Geometric subspace updates with applications to online adaptive nonlinear model reduction},
author = {Zimmermann, R. and Peherstorfer, B. and Willcox, K.},
journal = {SIAM Journal on Matrix Analysis and Applications},
volume = {39},
number = {1},
pages = {234-261},
year = {2018},
} |
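The basic GROUSE move is a rank-one geodesic update of an orthonormal basis toward a new data vector. A NumPy sketch with a generic step size t (the paper's contribution, a closed-form expression for the least-squares residual along this geodesic including the step at which it vanishes, is not reproduced here):

```python
import numpy as np

def grouse_step(U, v, t):
    """One rank-one geodesic update of the subspace span(U) toward v.

    U : (n, k) orthonormal basis, v : (n,) data vector, t : step size.
    """
    w = U.T @ v                  # least-squares coefficients of v in span(U)
    p = U @ w                    # projection of v onto the subspace
    r = v - p                    # residual (zero if v is in the subspace)
    norm_r, norm_p, norm_w = map(np.linalg.norm, (r, p, w))
    if norm_r < 1e-14:
        return U                 # nothing to adapt
    sigma = norm_r * norm_p
    d = (np.cos(t * sigma) - 1.0) * p / norm_p + np.sin(t * sigma) * r / norm_r
    return U + np.outer(d, w / norm_w)   # stays orthonormal along the geodesic
```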
[47] |
Peherstorfer, B., Willcox, K. & Gunzburger, M. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018. [Abstract]Abstract In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate, etc.) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization. [BibTeX]@article{PWG17MultiSurvey,
title = {Survey of multifidelity methods in uncertainty propagation, inference, and optimization},
author = {Peherstorfer, B. and Willcox, K. and Gunzburger, M.},
journal = {SIAM Review},
volume = {60},
number = {3},
pages = {550-591},
year = {2018},
} |
[48] |
Peherstorfer, B., Gugercin, S. & Willcox, K. Data-driven reduced model construction with time-domain Loewner models. SIAM Journal on Scientific Computing, 39(5):A2152-A2178, 2017. [Abstract]Abstract This work presents a data-driven nonintrusive model reduction approach for large-scale time-dependent systems with linear state dependence. Traditionally, model reduction is performed in an intrusive projection-based framework, where the operators of the full model are required either explicitly in an assembled form or implicitly through a routine that returns the action of the operators on a vector. Our nonintrusive approach constructs reduced models directly from trajectories of the inputs and outputs of the full model, without requiring the full-model operators. These trajectories are generated by running a simulation of the full model; our method then infers frequency-response data from these simulated time-domain trajectories and uses the data-driven Loewner framework to derive a reduced model. Only a single time-domain simulation is required to derive a reduced model with the new data-driven nonintrusive approach. We demonstrate our model reduction method on several benchmark examples and a finite element model of a cantilever beam; our approach recovers the classical Loewner reduced models and, for these problems, yields high-quality reduced models despite treating the full model as a black box. [BibTeX]@article{PSW16TLoewner,
title = {Data-driven reduced model construction with time-domain Loewner models},
author = {Peherstorfer, B. and Gugercin, S. and Willcox, K.},
journal = {SIAM Journal on Scientific Computing},
volume = {39},
number = {5},
pages = {A2152-A2178},
year = {2017},
} |
[49] |
Peherstorfer, B., Kramer, B. & Willcox, K. Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models. Journal of Computational Physics, 341:61-75, 2017. [Abstract]Abstract In failure probability estimation, importance sampling constructs a biasing distribution that targets the failure event such that a small number of model evaluations is sufficient to achieve a Monte Carlo estimate of the failure probability with an acceptable accuracy; however, the construction of the biasing distribution often requires a large number of model evaluations, which can become computationally expensive. We present a mixed multifidelity importance sampling (MMFIS) approach that leverages computationally cheap but erroneous surrogate models for the construction of the biasing distribution and that uses the original high-fidelity model to guarantee unbiased estimates of the failure probability. The key property of our MMFIS estimator is that it can leverage multiple surrogate models for the construction of the biasing distribution, instead of a single surrogate model alone. We show that our MMFIS estimator has a mean-squared error that is up to a constant lower than the mean-squared errors of the corresponding estimators that use any of the given surrogate models alone, even in settings where no information about the approximation qualities of the surrogate models is available. In particular, our MMFIS approach avoids the problem of selecting the surrogate model that leads to the estimator with the lowest mean-squared error, which is challenging if the approximation quality of the surrogate models is unknown. We demonstrate our MMFIS approach on numerical examples, where we achieve orders of magnitude speedups compared to using the high-fidelity model only. [BibTeX]@article{PKW16MixedMFIS,
title = {Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models},
author = {Peherstorfer, B. and Kramer, B. and Willcox, K.},
journal = {Journal of Computational Physics},
volume = {341},
pages = {61-75},
year = {2017},
} |
[50] |
Kramer, B., Peherstorfer, B. & Willcox, K. Feedback Control for Systems with Uncertain Parameters Using Online-Adaptive Reduced Models. SIAM Journal on Applied Dynamical Systems, 16(3):1563-1586, 2017. [Abstract]Abstract We consider control and stabilization for large-scale dynamical systems with uncertain, time-varying parameters. The time-critical task of controlling a dynamical system poses major challenges: Using large-scale models is prohibitive, and accurately inferring parameters can be expensive, too. We address both problems by proposing an offline-online strategy for controlling systems with time-varying parameters. During the offline phase, we use a high-fidelity model to compute a library of optimal feedback controller gains over a sampled set of parameter values. Then, during the online phase, in which the uncertain parameter changes over time, we learn a reduced-order model from system data. The learned reduced-order model is employed within an optimization routine to update the feedback control throughout the online phase. Since the system data naturally reflects the uncertain parameter, the data-driven updating of the controller gains is achieved without an explicit parameter estimation step. We consider two numerical test problems in the form of partial differential equations: a convection-diffusion system, and a model for flow through a porous medium. We demonstrate on those models that the proposed method successfully stabilizes the system model in the presence of process noise. [BibTeX]@article{KPW16ControlAdaptROM,
title = {Feedback Control for Systems with Uncertain Parameters Using Online-Adaptive Reduced Models},
author = {Kramer, B. and Peherstorfer, B. and Willcox, K.},
journal = {SIAM Journal on Applied Dynamical Systems},
volume = {16},
number = {3},
pages = {1563-1586},
year = {2017},
} |
[51] |
Peherstorfer, B., Willcox, K. & Gunzburger, M. Optimal model management for multifidelity Monte Carlo estimation. SIAM Journal on Scientific Computing, 38(5):A3163-A3194, 2016. [Abstract]Abstract This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models that follow a known rate of error decay and computational costs; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and costs. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show that speedups by orders of magnitude are obtained compared to Monte Carlo estimation that invokes a single model only. [BibTeX]@article{Peherstorfer15Multi,
title = {Optimal model management for multifidelity Monte Carlo estimation},
author = {Peherstorfer, B. and Willcox, K. and Gunzburger, M.},
journal = {SIAM Journal on Scientific Computing},
volume = {38},
number = {5},
pages = {A3163-A3194},
year = {2016},
} |
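The estimator itself is a telescoping combination of Monte Carlo means over nested sets of shared input samples, with control-variate coefficients alpha_i = rho_i * sigma_1 / sigma_i from the paper's analytic solution of the model-management problem. A sketch that assumes the model statistics and the optimal sample counts are given (`sample_inputs` is a hypothetical callable):

```python
import numpy as np

def mfmc_estimate(models, m, rho, sigma, sample_inputs, rng):
    """Multifidelity Monte Carlo estimate of E[f_0(Z)].

    models        : callables, models[0] is the high-fidelity model, the
                    rest are surrogates ordered by decreasing correlation
    m             : nondecreasing sample counts per model (m[0] smallest)
    rho, sigma    : correlations with the high-fidelity output (rho[0] = 1)
                    and output standard deviations
    sample_inputs : hypothetical callable returning m[-1] shared samples
    """
    Z = sample_inputs(m[-1], rng)        # nested, shared input samples
    alpha = [rho[i] * sigma[0] / sigma[i] for i in range(len(models))]
    est = np.mean([models[0](z) for z in Z[:m[0]]])
    for i in range(1, len(models)):      # control-variate corrections
        yi = np.array([models[i](z) for z in Z[:m[i]]])
        est += alpha[i] * (yi.mean() - yi[:m[i - 1]].mean())
    return est
```

Because the high-fidelity model enters with its own Monte Carlo mean and each correction term has zero expectation, the estimator is unbiased regardless of how poor the surrogates are; the surrogates only reduce the variance.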
[52] |
Peherstorfer, B. & Willcox, K. Data-driven operator inference for nonintrusive projection-based model reduction. Computer Methods in Applied Mechanics and Engineering, 306:196-215, 2016. [Abstract]Abstract This work presents a nonintrusive projection-based model reduction approach for full models based on time-dependent partial differential equations. Projection-based model reduction constructs the operators of a reduced model by projecting the equations of the full model onto a reduced space. Traditionally, this projection is intrusive, which means that the full-model operators are required either explicitly in an assembled form or implicitly through a routine that returns the action of the operators on a given vector; however, in many situations the full model is given as a black box that computes trajectories of the full-model states and outputs for given initial conditions and inputs, but does not provide the full-model operators. Our nonintrusive operator inference approach infers approximations of the reduced operators from the initial conditions, inputs, trajectories of the states, and outputs of the full model, without requiring the full-model operators. Our operator inference is applicable to full models that are linear in the state or have a low-order polynomial nonlinear term. The inferred operators are the solution of a least-squares problem and converge, with sufficient state trajectory data, in the Frobenius norm to the reduced operators that would be obtained via an intrusive projection of the full-model operators. Our numerical results demonstrate operator inference on a linear climate model and on a tubular reactor model with a polynomial nonlinear term of third order. [BibTeX]@article{Peherstorfer16DataDriven,
title = {Data-driven operator inference for nonintrusive projection-based model reduction},
author = {Peherstorfer, B. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {306},
pages = {196-215},
year = {2016},
} |
[53] |
Peherstorfer, B. & Willcox, K. Dynamic data-driven model reduction: Adapting reduced models from incomplete data. Advanced Modeling and Simulation in Engineering Sciences, 3(11), 2016. [Abstract]Abstract This work presents a data-driven online adaptive model reduction approach for systems that undergo dynamic changes. Classical model reduction constructs a reduced model of a large-scale system in an offline phase and then keeps the reduced model unchanged during the evaluations in an online phase; however, if the system changes online, the reduced model may fail to predict the behavior of the changed system. Rebuilding the reduced model from scratch is often too expensive in time-critical and real-time environments. We introduce a dynamic data-driven adaptation approach that adapts the reduced model from incomplete sensor data obtained from the system during the online computations. The updates to the reduced models are derived directly from the incomplete data, without recourse to the full model. Our adaptivity approach approximates the missing values in the incomplete sensor data with gappy proper orthogonal decomposition. These approximate data are then used to derive low-rank updates to the reduced basis and the reduced operators. In our numerical examples, incomplete data with 30-40 percent known values are sufficient to recover the reduced model that would be obtained via rebuilding from scratch. [BibTeX]@article{Peherstorfer16AdaptROM,
title = {Dynamic data-driven model reduction: Adapting reduced models from incomplete data},
author = {Peherstorfer, B. and Willcox, K.},
journal = {Advanced Modeling and Simulation in Engineering Sciences},
volume = {3},
number = {11},
year = {2016},
} |
[54] |
Peherstorfer, B., Cui, T., Marzouk, Y. & Willcox, K. Multifidelity Importance Sampling. Computer Methods in Applied Mechanics and Engineering, 300:490-509, 2016. [Abstract]Abstract Estimating statistics of model outputs with the Monte Carlo method often requires a large number of model evaluations. This leads to long runtimes if the model is expensive to evaluate. Importance sampling is one approach that can lead to a reduction in the number of model evaluations. Importance sampling uses a biasing distribution to sample the model more efficiently, but generating such a biasing distribution can be difficult and usually also requires model evaluations. A different strategy to speed up Monte Carlo sampling is to replace the computationally expensive high-fidelity model with a computationally cheap surrogate model; however, because the surrogate model outputs are only approximations of the high-fidelity model outputs, the estimate obtained using a surrogate model is in general biased with respect to the estimate obtained using the high-fidelity model. We introduce a multifidelity importance sampling (MFIS) method, which combines evaluations of both the high-fidelity and a surrogate model. It uses a surrogate model to facilitate the construction of the biasing distribution, but relies on a small number of evaluations of the high-fidelity model to derive an unbiased estimate of the statistics of interest. We prove that the MFIS estimate is unbiased even in the absence of accuracy guarantees on the surrogate model itself. The MFIS method can be used with any type of surrogate model, such as projection-based reduced-order models and data-fit models. Furthermore, the MFIS method is applicable to black-box models, i.e., where only inputs and the corresponding outputs of the high-fidelity and the surrogate model are available but not the details of the models themselves. We demonstrate on nonlinear and time-dependent problems that our MFIS method achieves speedups of up to several orders of magnitude compared to Monte Carlo with importance sampling that uses the high-fidelity model only. [BibTeX]@article{Peherstorfer16MFIS,
title = {Multifidelity Importance Sampling},
author = {Peherstorfer, B. and Cui, T. and Marzouk, Y. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {300},
pages = {490-509},
year = {2016},
} |
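A minimal sketch of the MFIS split between the two models: the surrogate only shapes the biasing density, while the high-fidelity model alone enters the estimate. Here a Gaussian fitted to surrogate-predicted failure samples serves as one simple choice of biasing density (the method itself allows any surrogate and any density construction):

```python
import numpy as np
from scipy import stats

def mfis_estimate(g_hi, g_lo, p, n_surrogate, n_hifi, rng):
    """Multifidelity importance sampling estimate of P[g_hi(Z) < 0].

    p : nominal input distribution (frozen scipy.stats distribution)
    Assumes the surrogate flags enough failures to fit the Gaussian.
    """
    Z = p.rvs(size=n_surrogate, random_state=rng)
    fail = Z[np.array([g_lo(z) < 0 for z in Z])]      # surrogate failure region
    q = stats.multivariate_normal(fail.mean(axis=0),
                                  np.cov(fail, rowvar=False))
    Zq = q.rvs(size=n_hifi, random_state=rng)
    w = p.pdf(Zq) / q.pdf(Zq)                         # importance weights
    ind = np.array([g_hi(z) < 0.0 for z in Zq])       # high-fidelity indicator
    return float(np.mean(w * ind))
```

Since the indicator is evaluated with the high-fidelity model and the weights correct for the biasing density, the estimate is unbiased even when the surrogate is badly wrong; a poor surrogate only costs variance.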
[55] |
Peherstorfer, B. & Willcox, K. Online Adaptive Model Reduction for Nonlinear Systems via Low-Rank Updates. SIAM Journal on Scientific Computing, 37(4):A2123-A2150, 2015. [Abstract]Abstract This work presents a nonlinear model reduction approach for systems of equations stemming from the discretization of partial differential equations with nonlinear terms. Our approach constructs a reduced system with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM); however, whereas classical DEIM derives a linear approximation of the nonlinear terms in a static DEIM space generated in an offline phase, our method adapts the DEIM space as the online calculation proceeds and thus provides a nonlinear approximation. The online adaptation uses new data to produce a reduced system that accurately approximates behavior not anticipated in the offline phase. These online data are obtained by querying the full-order system during the online phase, but only at a few selected components to guarantee a computationally efficient adaptation. Compared to the classical static approach, our online adaptive and nonlinear model reduction approach achieves accuracy improvements of up to three orders of magnitude in our numerical experiments with time-dependent and steady-state nonlinear problems. The examples also demonstrate that through adaptivity, our reduced systems provide valid approximations of the full-order systems outside of the parameter domains for which they were initially built in the offline phase. [BibTeX]@article{Peherstorfer15aDEIM,
title = {Online Adaptive Model Reduction for Nonlinear Systems via Low-Rank Updates},
author = {Peherstorfer, B. and Willcox, K.},
journal = {SIAM Journal on Scientific Computing},
volume = {37},
number = {4},
pages = {A2123-A2150},
year = {2015},
} |
[56] |
Peherstorfer, B., Gómez, P. & Bungartz, H.J. Reduced Models for Sparse Grid Discretizations of the Multi-Asset Black-Scholes Equation. Advances in Computational Mathematics, 41(5):1365-1389, 2015. [Abstract]Abstract This work presents reduced models for pricing basket options with the Black-Scholes and the Heston model. Basket options lead to multi-dimensional partial differential equations (PDEs) that quickly become computationally infeasible to discretize on full tensor grids. We therefore rely on sparse grid discretizations of the PDEs, which allow us to cope with the curse of dimensionality to some extent. We then derive reduced models with proper orthogonal decomposition. Our numerical results with the Black-Scholes model show that sufficiently accurate results are achieved while gaining speedups between 80 and 160 compared to the high-fidelity sparse grid model for 2-, 3-, and 4-asset options. For the Heston model, results are presented for a single-asset option that leads to a two-dimensional pricing problem, where we achieve significant speedups with our model reduction approach based on high-fidelity sparse grid models. [BibTeX]@article{pehersto15BlackScholes,
title = {Reduced Models for Sparse Grid Discretizations of the Multi-Asset Black-Scholes Equation},
author = {Peherstorfer, B. and Gómez, P. and Bungartz, H.J.},
journal = {Advances in Computational Mathematics},
volume = {41},
number = {5},
pages = {1365-1389},
year = {2015},
} |
[57] |
Peherstorfer, B. & Willcox, K. Dynamic Data-Driven Reduced-Order Models. Computer Methods in Applied Mechanics and Engineering, 291:21-41, 2015. [Abstract]Abstract Data-driven model reduction constructs reduced-order models of large-scale systems by learning the system response characteristics from data. Existing methods build the reduced-order models in a computationally expensive offline phase and then use them in an online phase to provide fast predictions of the system. In cases where the underlying system properties are not static but undergo dynamic changes, repeating the offline phase after each system change to rebuild the reduced-order model from scratch forfeits the savings gained in the online phase. This paper proposes dynamic reduced-order models that break with this classical but rigid approach. Dynamic reduced-order models exploit the opportunity presented by dynamic sensor data and adaptively incorporate sensor data during the online phase. This permits online adaptation to system changes while circumventing the expensive rebuilding of the model. A computationally cheap adaptation is achieved by constructing low-rank updates to the reduced operators. With these updates and with sufficient and accurate data, our approach recovers the same model that would be obtained by rebuilding from scratch. We demonstrate dynamic reduced-order models on a structural assessment example in the context of real-time decision making. We consider a plate in bending where the dynamic reduced-order model quickly adapts to changes in structural properties and achieves speedups of four orders of magnitude compared to rebuilding a model from scratch. [BibTeX]@article{pehersto15dynamic,
title = {Dynamic Data-Driven Reduced-Order Models},
author = {Peherstorfer, B. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {291},
pages = {21-41},
year = {2015},
} |
[58] |
Peherstorfer, B., Zimmer, S., Zenger, C. & Bungartz, H.J. A Multigrid Method for Adaptive Sparse Grids. SIAM Journal on Scientific Computing, 37(5):S51-S70, 2015. [Abstract]Abstract Sparse grids have become an important tool to reduce the number of degrees of freedom of discretizations of moderately high-dimensional partial differential equations; however, the reduction in degrees of freedom comes at the cost of an almost dense and unconventionally structured system of linear equations. To guarantee overall efficiency of the sparse grid approach, special linear solvers are required. We present a multigrid method that exploits the sparse grid structure to achieve an optimal runtime that scales linearly with the number of sparse grid points. Our approach is based on a novel decomposition of the right-hand sides of the coarse grid equations that leads to a reformulation in so-called auxiliary coefficients. With these auxiliary coefficients, the right-hand sides can be represented in a nodal point basis on low-dimensional full grids. Our proposed multigrid method directly operates in this auxiliary coefficient representation, circumventing most of the computationally cumbersome sparse grid structure. Numerical results on nonadaptive and spatially adaptive sparse grids confirm that the runtime of our method scales linearly with the number of sparse grid points and they indicate that the obtained convergence factors are bounded independently of the mesh width. [BibTeX]@article{peherstorfer15htmg,
title = {A Multigrid Method for Adaptive Sparse Grids},
author = {Peherstorfer, B. and Zimmer, S. and Zenger, C. and Bungartz, H.J.},
journal = {SIAM Journal on Scientific Computing},
volume = {37},
number = {5},
pages = {S51-S70},
year = {2015},
} |
[59] |
Peherstorfer, B., Pflüger, D. & Bungartz, H.J. Density Estimation with Adaptive Sparse Grids for Large Data Sets. In SIAM Data Mining 2014, SIAM, 2014. [Abstract]Abstract Nonparametric density estimation is a fundamental problem of statistics and data mining. Even though kernel density estimation is the most widely used method, its performance highly depends on the choice of the kernel bandwidth, and it can become computationally expensive for large data sets. We present an adaptive sparse-grid-based density estimation method which discretizes the estimated density function on basis functions centered at grid points rather than on kernels centered at the data points. Thus, the costs of evaluating the estimated density function are independent of the number of data points. We give details on how to estimate density functions on sparse grids and develop a cross validation technique for the parameter selection. We show numerical results to confirm that our sparse-grid-based method is well-suited for large data sets, and, finally, employ our method for the classification of astronomical objects to demonstrate that it is competitive with current kernel-based density estimation approaches with respect to classification accuracy and runtime. [BibTeX]@inproceedings{Peherstorfer14Density,
title = {Density Estimation with Adaptive Sparse Grids for Large Data Sets},
author = {Peherstorfer, B. and Pflüger, D. and Bungartz, H.J.},
year = {2014},
booktitle = {SIAM Data Mining 2014},
publisher = {SIAM},
} |
[60] |
Peherstorfer, B., Butnaru, D., Willcox, K. & Bungartz, H.J. Localized Discrete Empirical Interpolation Method. SIAM Journal on Scientific Computing, 36(1):A168-A192, 2014. [Abstract]Abstract This paper presents a new approach to construct more efficient reduced-order models for nonlinear partial differential equations with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM). Whereas DEIM projects the nonlinear term onto one global subspace, our localized discrete empirical interpolation method (LDEIM) computes several local subspaces, each tailored to a particular region of characteristic system behavior. Then, depending on the current state of the system, LDEIM selects an appropriate local subspace for the approximation of the nonlinear term. In this way, the dimensions of the local DEIM subspaces, and thus the computational costs, remain low even though the system might exhibit a wide range of behaviors as it passes through different regimes. LDEIM uses machine learning methods in the offline computational phase to discover these regions via clustering. Local DEIM approximations are then computed for each cluster. In the online computational phase, machine-learning-based classification procedures select one of these local subspaces adaptively as the computation proceeds. The classification can be achieved using either the system parameters or a low-dimensional representation of the current state of the system obtained via feature extraction. The LDEIM approach is demonstrated for a reacting flow example of an H2-air flame. In this example, where the system state has a strong nonlinear dependence on the parameters, the LDEIM provides speedups of two orders of magnitude over standard DEIM. [BibTeX]@article{peherstorfer13localized,
title = {Localized Discrete Empirical Interpolation Method},
author = {Peherstorfer, B. and Butnaru, D. and Willcox, K. and Bungartz, H.J.},
journal = {SIAM Journal on Scientific Computing},
volume = {36},
number = {1},
pages = {A168-A192},
year = {2014},
} |
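The offline phase can be approximated with generic clustering plus a standard greedy DEIM point selection per cluster; online, a simple nearest-centroid rule stands in for the parameter- and feature-based classifiers discussed in the paper. A NumPy/SciPy sketch with hypothetical names:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def deim_points(U):
    """Greedy DEIM interpolation-point selection for a basis U (n, r)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        res = U[:, j] - U[:, :j] @ c          # residual of the next mode
        idx.append(int(np.argmax(np.abs(res))))
    return np.array(idx)

def local_deim(snapshots, n_clusters, r):
    """Cluster nonlinear-term snapshots and build one DEIM basis per cluster.

    snapshots : (n, K) snapshots of the nonlinear term; assumes each
                cluster receives at least r snapshots
    """
    centroids, labels = kmeans2(snapshots.T, n_clusters, minit='++', seed=0)
    local = []
    for c in range(n_clusters):
        S = snapshots[:, labels == c]
        Uc = np.linalg.svd(S, full_matrices=False)[0][:, :r]
        local.append((Uc, deim_points(Uc)))
    return centroids, local

def select_local(state_feature, centroids):
    """Nearest-centroid classification, a stand-in for the paper's classifiers."""
    return int(np.argmin(np.linalg.norm(centroids - state_feature, axis=1)))
```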
[61] |
Peherstorfer, B., Kowitz, C., Pflüger, D. & Bungartz, H.J. Selected Recent Applications of Sparse Grids. Numerical Mathematics: Theory, Methods and Applications, 8(1):47-77, 2014. [Abstract]Abstract Sparse grids have become a versatile tool for a vast range of applications reaching from interpolation and numerical quadrature to data-driven problems and uncertainty quantification. We review four selected real-world applications of sparse grids: financial product pricing with the Black-Scholes model, interactive exploration of simulation data with sparse-grid-based surrogate models, analysis of simulation data through sparse grid data mining methods, and stability investigations of plasma turbulence simulations. [BibTeX]@article{Peherstorfer14SGReview,
title = {Selected Recent Applications of Sparse Grids},
author = {Peherstorfer, B. and Kowitz, C. and Pflüger, D. and Bungartz, H.J.},
journal = {Numerical Mathematics: Theory, Methods and Applications},
volume = {8},
number = {1},
pages = {47-77},
year = {2014},
} |
[62] |
Pflüger, D., Peherstorfer, B. & Bungartz, H.J. Spatially adaptive sparse grids for high-dimensional data-driven problems. Journal of Complexity, 26(5):508-522, 2010. [Abstract]Abstract Sparse grids allow one to employ grid-based discretization methods in data-driven problems. We present an extension of the classical sparse grid approach that allows us to tackle high-dimensional problems by spatially adaptive refinement, modified ansatz functions, and efficient regularization techniques. The competitiveness of this method is shown for typical benchmark problems with up to 166 dimensions for classification in data mining, pointing out properties of sparse grids in this context. To gain insight into the adaptive refinement and to examine the scope for further improvements, the approximation of non-smooth indicator functions with adaptive sparse grids has been studied as a model problem. As an example for an improved adaptive grid refinement, we present results for an edge-detection strategy. [BibTeX]@article{pflueger10spatially,
title = {Spatially adaptive sparse grids for high-dimensional data-driven problems},
author = {Pflüger, D. and Peherstorfer, B. and Bungartz, H.J.},
journal = {Journal of Complexity},
volume = {26},
number = {5},
pages = {508-522},
year = {2010},
} |