[1] Depth separation for reduced deep networks in nonlinear model reduction: Distilling shock waves in nonlinear hyperbolic problems. arXiv:2007.13977, 2020.
[2] Sampling low-dimensional Markovian dynamics for pre-asymptotically recovering reduced models from data with operator inference. SIAM Journal on Scientific Computing, 42:A3489-A3515, 2020.
[3] Lift & Learn: Physics-informed machine learning for large-scale nonlinear dynamical systems. Physica D: Nonlinear Phenomena, 406, 2020.
[4] Data-driven operator inference for nonintrusive projection-based model reduction. Computer Methods in Applied Mechanics and Engineering, 306:196-215, 2016.

[1] Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling. SIAM Journal on Scientific Computing, 42:A2803-A2836, 2020.
[2] Manifold approximations via transported subspaces: Model reduction for transport-dominated problems. arXiv:1912.13024, 2019.
[3] Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points. SIAM Journal on Scientific Computing, 42:A2837-A2864, 2020.

[1] Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018.
[2] A transport-based multifidelity preconditioner for Markov chain Monte Carlo. Advances in Computational Mathematics, 45:2321-2348, 2019.
[3] Multifidelity Monte Carlo estimation with adaptive low-fidelity models. SIAM/ASA Journal on Uncertainty Quantification, 7:579-603, 2019.

[1] Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018.
[2] Optimal model management for multifidelity Monte Carlo estimation. SIAM Journal on Scientific Computing, 38(5):A3163-A3194, 2016.
[3] Convergence analysis of multifidelity Monte Carlo estimation. Numerische Mathematik, 139(3):683-707, 2018.
[4] Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 6(2):737-761, 2018.
Sampling low-dimensional Markovian dynamics for pre-asymptotically recovering reduced models from data with operator inference. SIAM Journal on Scientific Computing, 42:A3489-A3515, 2020.
Abstract: This work introduces a method for learning low-dimensional models from data of high-dimensional black-box dynamical systems. The novelty is that the learned models are exactly the reduced models that are traditionally constructed with model reduction techniques requiring full knowledge of the governing equations and operators of the high-dimensional systems. Thus, the learned models are guaranteed to inherit the well-studied properties of reduced models from traditional model reduction. The key ingredient is a new data sampling scheme that obtains re-projected trajectories of high-dimensional systems corresponding to Markovian dynamics in low-dimensional subspaces. The exact recovery of reduced models from these re-projected trajectories is guaranteed pre-asymptotically, under certain conditions, for finite amounts of data and for a large class of systems with polynomial nonlinear terms. Numerical results demonstrate that the low-dimensional models learned with the proposed approach match reduced models from traditional model reduction up to numerical errors in practice. The numerical results further indicate that low-dimensional models fitted to re-projected trajectories are predictive even in situations where models fitted to trajectories without re-projection are inaccurate and unstable.
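The re-projection scheme can be illustrated with a short sketch. The following Python snippet is not the authors' code; it assumes a black-box discrete-time full model, an orthonormal basis V (e.g., from POD), and a linear-quadratic ansatz for the reduced model, with all function and variable names chosen here purely for illustration.

```python
# Minimal sketch of operator inference with re-projection (illustrative, not the paper's code).
import numpy as np

def full_model_step(x):
    # toy high-dimensional full model: linear part plus a weak elementwise quadratic term
    n = x.size
    A = -0.1 * np.eye(n) + 0.01 * np.diag(np.ones(n - 1), k=1)
    return x + 0.1 * (A @ x + 0.001 * x**2)

def reprojected_data(V, x0, num_steps):
    """Generate re-projected trajectory pairs (xr_k, xr_{k+1}) in the reduced space."""
    xr = V.T @ x0
    pairs = []
    for _ in range(num_steps):
        x_next = full_model_step(V @ xr)      # one full-model step from the lifted state
        xr_next = V.T @ x_next                # project back: re-projection
        pairs.append((xr.copy(), xr_next))
        xr = xr_next                          # restart from the projected state
    return pairs

def infer_operators(pairs):
    """Least-squares fit of reduced operators: xr_{k+1} ~ Ar xr_k + Hr (xr_k kron xr_k)."""
    R = np.array([p[0] for p in pairs])                    # reduced states
    Rnext = np.array([p[1] for p in pairs])                # reduced next states
    Q = np.array([np.kron(p[0], p[0]) for p in pairs])     # quadratic features
    D = np.hstack([R, Q])                                  # data matrix
    O, *_ = np.linalg.lstsq(D, Rnext, rcond=None)          # stacked operators
    r = R.shape[1]
    return O[:r].T, O[r:].T                                # Ar (r x r), Hr (r x r^2)

rng = np.random.default_rng(0)
n, r = 50, 3
V, _ = np.linalg.qr(rng.standard_normal((n, r)))           # orthonormal basis (stand-in for POD)
Ar, Hr = infer_operators(reprojected_data(V, rng.standard_normal(n), 200))
print(Ar.shape, Hr.shape)
```

Each re-projected pair is generated by lifting the current reduced state, taking a single full-model step, and projecting back, so the collected data correspond to Markovian dynamics in the subspace spanned by V rather than to a projected trajectory with memory.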
Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points. SIAM Journal on Scientific Computing, 42:A2837-A2864, 2020.
Abstract: This work investigates the stability of (discrete) empirical interpolation for nonlinear model reduction and state field approximation from measurements. Empirical interpolation derives approximations from a few samples (measurements) via interpolation in low-dimensional spaces. It has been observed that empirical interpolation can become unstable if the samples are perturbed due to, e.g., noise, turbulence, and numerical inaccuracies. The main contribution of this work is a probabilistic analysis that shows that stable approximations are obtained if samples are randomized and if more samples than dimensions of the low-dimensional spaces are used. Oversampling, i.e., taking more sampling points than dimensions of the low-dimensional spaces, leads to approximations via regression and is known under the name of gappy proper orthogonal decomposition. Building on the insights of the probabilistic analysis, a deterministic sampling strategy is presented that aims to achieve lower approximation errors with fewer points than randomized sampling by taking information about the low-dimensional spaces into account. Numerical results of reconstructing velocity fields from noisy measurements of combustion processes and model reduction in the presence of noise demonstrate the instability of empirical interpolation and the stability of gappy proper orthogonal decomposition with oversampling.
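A minimal sketch of the interpolation-versus-oversampling comparison, assuming a given orthonormal basis and synthetic noisy point measurements (all names are illustrative, not taken from the paper):

```python
import numpy as np

def gappy_pod_reconstruct(U, sample_idx, measurements):
    """Regression-based (gappy POD) reconstruction from point measurements.

    U: (n, r) orthonormal basis; sample_idx: indices of measured components;
    measurements: noisy values of the field at sample_idx.
    With len(sample_idx) == r this reduces to (discrete empirical) interpolation,
    with len(sample_idx) > r it is oversampled least-squares regression."""
    coeffs, *_ = np.linalg.lstsq(U[sample_idx, :], measurements, rcond=None)
    return U @ coeffs

rng = np.random.default_rng(1)
n, r = 200, 5
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
x_true = U @ rng.standard_normal(r)                      # a state lying in the span of U
noise = 1e-2 * rng.standard_normal(n)

for m in (r, 4 * r):                                     # interpolation vs. oversampling
    idx = rng.choice(n, size=m, replace=False)           # randomized sampling points
    x_rec = gappy_pod_reconstruct(U, idx, (x_true + noise)[idx])
    print(f"m = {m:2d} points, relative error = "
          f"{np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true):.2e}")
```

With m = r points the reconstruction interpolates the noisy samples exactly and tends to be more sensitive to the perturbations; with m > r points the least-squares fit averages the noise over the extra samples, which is the stabilizing effect the probabilistic analysis quantifies.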
Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 6(2):737-761, 2018.
Abstract: Accurately estimating rare event probabilities with Monte Carlo can become costly if each sample requires a computationally expensive high-fidelity model evaluation to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses a single model alone.
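A simplified two-level sketch of the preconditioning idea, with toy limit-state functions and Gaussian biasing densities standing in for the surrogate hierarchy of the paper (all quantities below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(2)
d, threshold = 2, 6.0

def g_lofi(x):   # cheap low-fidelity limit-state proxy (illustrative)
    return x.sum(axis=1)

def g_hifi(x):   # expensive high-fidelity limit state (illustrative)
    return x.sum(axis=1) + 0.1 * np.sin(x[:, 0])

def normal_pdf(x, mean, std):
    z = (x - mean) / std
    return np.exp(-0.5 * np.sum(z**2, axis=1)) / np.prod(std * np.sqrt(2.0 * np.pi))

def ce_update(g, mean, std, n_samples, quantile=0.9):
    """One cross-entropy step: sample from the current biasing density, keep the
    elite samples with the largest limit-state values, refit the Gaussian parameters."""
    x = mean + std * rng.standard_normal((n_samples, d))
    vals = g(x)
    elite = x[vals >= np.quantile(vals, quantile)]
    return elite.mean(axis=0), elite.std(axis=0)

# Precondition on the low-fidelity model with many cheap evaluations ...
mean, std = np.zeros(d), np.ones(d)
for _ in range(5):
    mean, std = ce_update(g_lofi, mean, std, n_samples=2000)

# ... then refine the biasing density with only a few high-fidelity evaluations.
mean, std = ce_update(g_hifi, mean, std, n_samples=200)

# Importance sampling with the final biasing density, weighted by nominal/biasing pdf.
x = mean + std * rng.standard_normal((5000, d))
weights = normal_pdf(x, np.zeros(d), np.ones(d)) / normal_pdf(x, mean, std)
p_fail = np.mean((g_hifi(x) >= threshold) * weights)
print(f"estimated failure probability: {p_fail:.3e}")
```

The point of the preconditioning is visible in the evaluation counts: the expensive model is only queried once the biasing density has already been moved close to the failure region by the cheap model.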
Optimal model management for multifidelity Monte Carlo estimation. SIAM Journal on Scientific Computing, 38(5):A3163-A3194, 2016.
Abstract: This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models with known rates of error decay and computational cost; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and costs. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show that speedups by orders of magnitude are obtained compared to Monte Carlo estimation that invokes a single model only.
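The two-model special case can be sketched as follows. The correlation, variances, and costs are estimated from a small pilot run, and the sample allocation and control-variate coefficient follow the standard two-model multifidelity Monte Carlo formulas; the model functions, costs, and budget below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_hi(z):   # expensive high-fidelity output (illustrative)
    return np.exp(z) + 0.05 * z**3

def f_lo(z):   # cheap correlated surrogate (illustrative)
    return 1.0 + z + 0.5 * z**2

w_hi, w_lo, budget = 1.0, 0.01, 200.0     # cost per evaluation and total budget

# Pilot run to estimate standard deviations and the correlation between the models.
z_pilot = rng.standard_normal(100)
s_hi, s_lo = np.std(f_hi(z_pilot)), np.std(f_lo(z_pilot))
rho = np.corrcoef(f_hi(z_pilot), f_lo(z_pilot))[0, 1]

# Optimal sample allocation for two models (high-fidelity plus one surrogate).
r = np.sqrt(w_hi * rho**2 / (w_lo * (1.0 - rho**2)))   # surrogate evaluations per hi-fi sample
m_hi = int(budget / (w_hi + w_lo * r))
m_lo = int(r * m_hi)
alpha = rho * s_hi / s_lo                               # control-variate coefficient

# MFMC estimator: unbiased because the high-fidelity sample mean enters directly.
z = rng.standard_normal(m_lo)
est = (np.mean(f_hi(z[:m_hi]))
       + alpha * (np.mean(f_lo(z)) - np.mean(f_lo(z[:m_hi]))))
print(f"m_hi = {m_hi}, m_lo = {m_lo}, MFMC estimate = {est:.4f}")
```

Note that the surrogate only enters through the difference of its sample means on nested sample sets, so any bias of the surrogate cancels and the estimator remains unbiased for the high-fidelity statistic.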
Online Adaptive Model Reduction for Nonlinear Systems via Low-Rank Updates. SIAM Journal on Scientific Computing, 37(4):A2123-A2150, 2015.
Abstract: This work presents a nonlinear model reduction approach for systems of equations stemming from the discretization of partial differential equations with nonlinear terms. Our approach constructs a reduced system with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM); however, whereas classical DEIM derives a linear approximation of the nonlinear terms in a static DEIM space generated in an offline phase, our method adapts the DEIM space as the online calculation proceeds and thus provides a nonlinear approximation. The online adaptation uses new data to produce a reduced system that accurately approximates behavior not anticipated in the offline phase. These online data are obtained by querying the full-order system during the online phase, but only at a few selected components to guarantee a computationally efficient adaptation. Compared to the classical static approach, our online adaptive and nonlinear model reduction approach achieves accuracy improvements of up to three orders of magnitude in our numerical experiments with time-dependent and steady-state nonlinear problems. The examples also demonstrate that through adaptivity, our reduced systems provide valid approximations of the full-order systems outside of the parameter domains for which they were initially built in the offline phase.
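A much-simplified sketch of the online adaptation mechanics, assuming the full nonlinear term can be queried at a few selected components; the rank-one row correction below is an illustrative stand-in for basis adaptation from sparse samples, not the paper's exact low-rank update:

```python
import numpy as np

def deim_approx(U, idx, f_at_idx):
    """DEIM/gappy-style approximation of a nonlinear term from its sampled entries."""
    c, *_ = np.linalg.lstsq(U[idx, :], f_at_idx, rcond=None)
    return U @ c

def adapt_basis(U, sample_idx, f_samples):
    """Rank-one basis update fitted to newly sampled components (simplified).

    Computes the coefficients and residual of the current basis on the sampled rows
    and corrects those rows so that the new data are matched exactly."""
    c, *_ = np.linalg.lstsq(U[sample_idx, :], f_samples, rcond=None)
    res = f_samples - U[sample_idx, :] @ c
    U = U.copy()
    U[sample_idx, :] += np.outer(res, c) / (c @ c)   # rank-one correction on sampled rows
    Q, _ = np.linalg.qr(U)                            # re-orthonormalize the adapted basis
    return Q

rng = np.random.default_rng(4)
n, r = 100, 4
snapshots = rng.standard_normal((n, 30))
U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]   # offline DEIM space from snapshots

# Online: the nonlinear term drifts away from the offline snapshots; adapt from sparse samples.
f_new = rng.standard_normal(n)
idx = rng.choice(n, size=3 * r, replace=False)       # a few queried components of the full term
U = adapt_basis(U, idx, f_new[idx])
approx = deim_approx(U, idx, f_new[idx])
print("error at sampled components:", np.linalg.norm(approx[idx] - f_new[idx]))
```

The essential point carried over from the abstract is that only a few components of the full-order nonlinear term are queried per adaptation step, so the update cost stays independent of the full dimension apart from those sparse evaluations.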