[1] Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations. Journal of Computational Physics, 2023.
[2] Randomized Sparse Neural Galerkin Schemes for Solving Evolution Equations with Deep Networks. NeurIPS 2023 (spotlight).
[3] Context-aware controller inference for stabilizing dynamical systems from scarce data. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2023 (accepted).
[4] Lift & Learn: Physics-informed machine learning for large-scale nonlinear dynamical systems. Physica D: Nonlinear Phenomena, Volume 406, 2020.
[1] Breaking the Kolmogorov Barrier with Nonlinear Model Reduction. Notices of the American Mathematical Society, 69:725-733, 2022.
[2] Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling. SIAM Journal on Scientific Computing, 42:A2803-A2836, 2020.
[3] Lookahead data-gathering strategies for online adaptive model reduction of transport-dominated problems. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2023 (accepted).
[4] Stability of discrete empirical interpolation and gappy proper orthogonal decomposition with randomized and deterministic sampling points. SIAM Journal on Scientific Computing, 42:A2837-A2864, 2020.
[1] Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018.
[2] Multi-fidelity covariance estimation in the log-Euclidean geometry. International Conference on Machine Learning (ICML), 2023.
[3] Multifidelity Monte Carlo estimation with adaptive low-fidelity models. SIAM/ASA Journal on Uncertainty Quantification, 7:579-603, 2019.
[1] Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550-591, 2018.
[2] Optimal model management for multifidelity Monte Carlo estimation. SIAM Journal on Scientific Computing, 38(5):A3163-A3194, 2016.
[3] Convergence analysis of multifidelity Monte Carlo estimation. Numerische Mathematik, 139(3):683-707, 2018.
[4] Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 6(2):737-761, 2018.
Neural Galerkin Scheme with Active Learning for High-Dimensional Evolution Equations. Journal of Computational Physics, 2023. Abstract: Machine learning methods have been shown to give accurate predictions in high dimensions provided that sufficient training data are available. Yet, many interesting questions in science and engineering involve situations where initially no data are available and the principal aim is to gather insights from a known model. Here we consider this problem in the context of systems whose evolution can be described by partial differential equations (PDEs). We use deep learning to solve these equations by generating data on-the-fly when and where they are needed, without prior information about the solution. The proposed Neural Galerkin schemes derive nonlinear dynamical equations for the network weights by minimization of the residual of the time derivative of the solution, and solve these equations using standard integrators for initial value problems. The sequential learning of the weights over time allows for adaptive collection of new input data for residual estimation. This step uses importance sampling informed by the current state of the solution, in contrast with other machine learning methods for PDEs that optimize the network parameters globally in time. This active form of data acquisition is essential to enable the approximation power of the neural networks and to break the curse of dimensionality faced by non-adaptive learning strategies. The applicability of the method is illustrated on several numerical examples involving high-dimensional PDEs, including advection equations with many variables, as well as Fokker-Planck equations for systems with several interacting particles.
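The core loop of the abstract, a Galerkin condition on the parameter velocity solved at adaptively sampled collocation points followed by a standard time integrator, can be illustrated with a deliberately tiny sketch. The function names and the one-parameter Gaussian "network" below are illustrative assumptions, not the paper's architecture; the example transports a bump under the advection equation u_t = -c u_x, where the exact parameter dynamics are mu' = c.

```python
import numpy as np

def ansatz(x, mu):
    # one-parameter "network": a Gaussian bump centered at mu
    return np.exp(-(x - mu) ** 2)

def grad_mu(x, mu):
    # derivative of the ansatz with respect to its parameter mu
    return 2.0 * (x - mu) * ansatz(x, mu)

def rhs(x, mu, c):
    # right-hand side of the advection equation: f(u) = -c u_x
    return 2.0 * c * (x - mu) * ansatz(x, mu)

def neural_galerkin_step(mu, dt, c, rng, n_samples=64):
    # sample collocation points near the current solution (adaptive sampling
    # informed by the current state, as in the abstract)
    x = rng.normal(loc=mu, scale=1.0, size=n_samples)
    J = grad_mu(x, mu)[:, None]          # Jacobian of ansatz w.r.t. parameters
    # Galerkin condition: J @ mu_dot ~ rhs, solved in least squares
    mu_dot = np.linalg.lstsq(J, rhs(x, mu, c), rcond=None)[0][0]
    return mu + dt * mu_dot              # explicit Euler as the time integrator

rng = np.random.default_rng(0)
mu, c, dt = 0.0, 1.0, 1e-2
for _ in range(100):
    mu = neural_galerkin_step(mu, dt, c, rng)
# exact transport moves the bump with speed c, so mu approaches c * t = 1.0
```

Because the residual equation is consistent here, the least-squares solve recovers the exact parameter velocity; with a genuine deep network, the same structure yields a coupled ODE system for all weights.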
Context-aware controller inference for stabilizing dynamical systems from scarce data. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2023 (accepted). Abstract: This work introduces a data-driven control approach for stabilizing high-dimensional dynamical systems from scarce data. The proposed context-aware controller inference approach is based on the observation that controllers need to act locally only on the unstable dynamics to stabilize systems. This means it is sufficient to learn the unstable dynamics alone, which are typically confined to much lower dimensional spaces than the high-dimensional state spaces of all system dynamics and thus few data samples are sufficient to identify them. Numerical experiments demonstrate that context-aware controller inference learns stabilizing controllers from orders of magnitude fewer data samples than traditional data-driven control techniques and variants of reinforcement learning. The experiments further show that the low data requirements of context-aware controller inference are especially beneficial in data-scarce engineering problems with complex physics, for which learning complete system dynamics is often intractable in terms of data and training costs.
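The key observation, that a controller only needs to act on the low-dimensional unstable dynamics, can be checked on a toy linear system. The sketch below is not the paper's inference procedure: the paper identifies the unstable dynamics from scarce data, whereas here the unstable eigenspace is read off a known symmetric matrix A just to show that stabilizing the small unstable block stabilizes the full system; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# toy symmetric system matrix with exactly two unstable eigenvalues
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate(([0.5, 1.0], -rng.uniform(0.1, 2.0, n - 2)))
A = (V * eigs) @ V.T
B = rng.standard_normal((n, 2))

# unstable eigenspace: context-aware controller inference identifies this from
# data; here we take it directly from the eigendecomposition to keep it short
W = V[:, :2]                 # basis of the 2-dimensional unstable subspace
A_u = np.diag(eigs[:2])      # unstable dynamics restricted to that subspace
B_u = W.T @ B                # control input as seen by the unstable modes

# stabilize only the 2x2 unstable block: place its closed-loop poles at -1
K = np.linalg.solve(B_u, A_u + np.eye(2))
A_closed = A - B @ K @ W.T   # feedback u = -K W^T x acts on the unstable part
```

In the eigenbasis of A the closed loop is block triangular, so its spectrum is the stabilized 2x2 block together with the already-stable modes, even though K was designed on a 2-dimensional system.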
Multi-fidelity covariance estimation in the log-Euclidean geometry. International Conference on Machine Learning (ICML), 2023. Abstract: We introduce a multi-fidelity estimator of covariance matrices that employs the log-Euclidean geometry of the symmetric positive-definite manifold. The estimator fuses samples from a hierarchy of data sources of differing fidelities and costs for variance reduction while guaranteeing definiteness, in contrast with previous approaches. The new estimator makes covariance estimation tractable in applications where simulation or data collection is expensive; to that end, we develop an optimal sample allocation scheme that minimizes the mean-squared error of the estimator given a fixed budget. Guaranteed definiteness is crucial to metric learning, data assimilation, and other downstream tasks. Evaluations of our approach using data from physical applications (heat conduction, fluid dynamics) demonstrate more accurate metric learning and speedups of more than one order of magnitude compared to benchmarks.
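A minimal sketch of why the log-Euclidean construction guarantees definiteness: a control-variate-style correction is applied to matrix logarithms of sample covariances, and the matrix exponential of a symmetric matrix is always positive definite. The function names, the fixed weight `alpha`, and the unpaired low-fidelity data below are simplifying assumptions (the paper optimizes the sample allocation and correlates the fidelities); the sketch only shows the estimator's structure.

```python
import numpy as np

def spd_log(A):
    # matrix logarithm of a symmetric positive-definite matrix via eigh
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    # matrix exponential of a symmetric matrix; the result is always SPD
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def mf_covariance(hi, lo_paired, lo_extra, alpha=1.0):
    # control-variate correction in the matrix-log domain; mapping back with
    # spd_exp guarantees the fused covariance estimate is positive definite
    L_hi = spd_log(np.cov(hi, rowvar=False))
    L_p = spd_log(np.cov(lo_paired, rowvar=False))
    L_e = spd_log(np.cov(lo_extra, rowvar=False))
    return spd_exp(L_hi + alpha * (L_e - L_p))

rng = np.random.default_rng(0)
C = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 0.5]])
L = np.linalg.cholesky(C)
hi = rng.standard_normal((20, 3)) @ L.T            # few expensive samples
lo = rng.standard_normal((500, 3)) @ (0.9 * L.T)   # many cheap, biased samples
est = mf_covariance(hi, lo[:20], lo)
```

A plain linear combination of sample covariances can lose definiteness when the correction term dominates; performing the same combination on logarithms sidesteps that failure mode entirely.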
Model reduction for transport-dominated problems via online adaptive bases and adaptive sampling. SIAM Journal on Scientific Computing, 42:A2803-A2836, 2020. Abstract: This work presents a model reduction approach for problems with coherent structures that propagate over time such as convection-dominated flows and wave-type phenomena. Traditional model reduction methods have difficulties with these transport-dominated problems because propagating coherent structures typically introduce high-dimensional features that require high-dimensional approximation spaces. The approach proposed in this work exploits the locality in space and time of propagating coherent structures to derive efficient reduced models. First, full-model solutions are approximated locally in time via local reduced spaces that are adapted with basis updates during time stepping. The basis updates are derived from querying the full model at a few selected spatial coordinates. Second, the locality in space of the coherent structures is exploited via an adaptive sampling scheme that selects at which components to query the full model for computing the basis updates. Our analysis shows that, in probability, the more local the coherent structure is in space, the fewer full-model samples are required to adapt the reduced basis with the proposed adaptive sampling scheme. Numerical results on benchmark examples with interacting wave-type structures and time-varying transport speeds and on a model combustor of a single-element rocket engine demonstrate the wide applicability of our approach and the significant runtime speedups compared to full models and traditional reduced models.
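The first ingredient, adapting a reduced basis from full-model values queried at only a few spatial components, can be sketched with a simplified rank-one update. This is a stand-in illustration, not the paper's update rule: `adapt_basis`, the fixed sampling pattern `idx`, and the specific correction formula are all assumptions chosen so that, after the update, the basis can represent the new snapshot at the sampled entries.

```python
import numpy as np

def adapt_basis(V, idx, f_idx):
    # gappy least-squares coefficients of the snapshot in the sampled basis rows
    c = np.linalg.lstsq(V[idx], f_idx, rcond=None)[0]
    res = f_idx - V[idx] @ c                  # residual at the sampled entries
    # rank-one additive correction confined to the sampled components
    dV = np.zeros_like(V)
    dV[idx] = np.outer(res, c) / (c @ c)
    Vnew, _ = np.linalg.qr(V + dV)            # re-orthonormalize the basis
    return Vnew

rng = np.random.default_rng(0)
n, r = 30, 4
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
f = rng.standard_normal(n)      # new snapshot with features outside span(V)
idx = np.arange(0, n, 3)        # query the full model at every third component
Vnew = adapt_basis(V, idx, f[idx])
```

By construction the updated basis interpolates the sampled snapshot values exactly, which mirrors the abstract's point: the more spatially local the propagating structure, the fewer sampled components are needed to keep the basis current.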
Sampling low-dimensional Markovian dynamics for pre-asymptotically recovering reduced models from data with operator inference. SIAM Journal on Scientific Computing, 42:A3489-A3515, 2020. Abstract: This work introduces a method for learning low-dimensional models from data of high-dimensional black-box dynamical systems. The novelty is that the learned models are exactly the reduced models that are traditionally constructed with model reduction techniques that require full knowledge of governing equations and operators of the high-dimensional systems. Thus, the learned models are guaranteed to inherit the well-studied properties of reduced models from traditional model reduction. The key ingredient is a new data sampling scheme to obtain re-projected trajectories of high-dimensional systems that correspond to Markovian dynamics in low-dimensional subspaces. The exact recovery of reduced models from these re-projected trajectories is guaranteed pre-asymptotically under certain conditions for finite amounts of data and for a large class of systems with polynomial nonlinear terms. Numerical results demonstrate that the low-dimensional models learned with the proposed approach match reduced models from traditional model reduction up to numerical errors in practice. The numerical results further indicate that low-dimensional models fitted to re-projected trajectories are predictive even in situations where models fitted to trajectories without re-projection are inaccurate and unstable.
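The re-projection idea is concrete enough to demonstrate on a toy linear system: after each black-box step, the state is projected back onto the reduced subspace before stepping again, so the recorded low-dimensional trajectory obeys exactly Markovian reduced dynamics, and a least-squares fit recovers the intrusive Galerkin reduced operator. The variable names and the linear Euler-step model below are illustrative assumptions, a minimal sketch rather than the paper's full polynomial setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, steps = 20, 3, 50
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))

def full_step(x):
    # black-box high-dimensional model: one explicit Euler step of x' = A x
    return x + 0.01 * (A @ x)

V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # reduced basis

# re-projection: map the state back to the subspace after every single step,
# so the sampled trajectory corresponds to Markovian reduced dynamics
X = [rng.standard_normal(r)]
for _ in range(steps):
    X.append(V.T @ full_step(V @ X[-1]))
X = np.asarray(X)

# operator inference: least-squares fit of the reduced one-step operator
M_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T

# the intrusive Galerkin reduced operator, for comparison
M_proj = np.eye(r) + 0.01 * (V.T @ A @ V)
```

With re-projection the fitted operator matches the intrusive one up to numerical error from finite data, which is the pre-asymptotic exact-recovery guarantee in miniature; fitting a trajectory without the intermediate projections would instead see non-Markovian closure effects.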