Selected journal publications
[1] 
Peherstorfer, B. & Marzouk, Y. A transport-based multifidelity preconditioner for Markov chain Monte Carlo. Advances in Computational Mathematics, 45:2321–2348, 2019.
Abstract: Markov chain Monte Carlo (MCMC) sampling of posterior distributions arising in Bayesian inverse problems is challenging when evaluations of the forward model are computationally expensive. Replacing the forward model with a low-cost, low-fidelity model often significantly reduces computational cost; however, employing a low-fidelity model alone means that the stationary distribution of the MCMC chain is the posterior distribution corresponding to the low-fidelity model, rather than the original posterior distribution corresponding to the high-fidelity model. We propose a multifidelity approach that combines, rather than replaces, the high-fidelity model with a low-fidelity model. First, the low-fidelity model is used to construct a transport map that deterministically couples a reference Gaussian distribution with an approximation of the low-fidelity posterior. Then, the high-fidelity posterior distribution is explored using a non-Gaussian proposal distribution derived from the transport map. This multifidelity preconditioned MCMC approach seeks efficient sampling via a proposal that is explicitly tailored to the posterior at hand and that is constructed efficiently with the low-fidelity model. By relying on the low-fidelity model only to construct the proposal distribution, our approach guarantees that the stationary distribution of the MCMC chain is the high-fidelity posterior. In our numerical examples, our multifidelity approach achieves significant speedups compared to single-fidelity MCMC sampling methods.
BibTeX:
@article{PM18MultiTM,
  title = {A transport-based multifidelity preconditioner for Markov chain Monte Carlo},
  author = {Peherstorfer, B. and Marzouk, Y.},
  journal = {Advances in Computational Mathematics},
  volume = {45},
  pages = {2321--2348},
  year = {2019},
}
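The paper's proposal comes from a generally nonlinear transport map; as a minimal hypothetical sketch, the special case of a *linear* map reduces to independence Metropolis-Hastings with a Gaussian proposal (e.g., fitted to low-fidelity posterior samples). The names and the Gaussian simplification below are illustrative, not the paper's implementation, but the sketch shows why the chain's stationary distribution is the high-fidelity posterior regardless of the proposal's quality:

```python
import numpy as np

def independence_mh(log_post_hi, prop_mean, prop_cov, n_steps, rng):
    """Independence Metropolis-Hastings: proposals come from a fixed Gaussian
    (here standing in for a linear transport map of the low-fidelity posterior);
    the accept/reject step uses the high-fidelity posterior, so the chain's
    stationary distribution is the high-fidelity posterior."""
    d = len(prop_mean)
    L = np.linalg.cholesky(prop_cov)
    cov_inv = np.linalg.inv(prop_cov)

    def log_prop(x):  # proposal log-density up to a constant
        r = x - prop_mean
        return -0.5 * r @ cov_inv @ r

    x = prop_mean.copy()
    lp_x = log_post_hi(x)
    chain = []
    for _ in range(n_steps):
        # push a reference Gaussian sample through the (linear) map
        xi = rng.standard_normal(d)
        y = prop_mean + L @ xi
        lp_y = log_post_hi(y)
        # Metropolis-Hastings ratio for an independence proposal
        log_alpha = (lp_y - lp_x) - (log_prop(y) - log_prop(x))
        if np.log(rng.uniform()) < log_alpha:
            x, lp_x = y, lp_y
        chain.append(x)
    return np.array(chain)
```

Even with a deliberately misplaced proposal (centered away from the posterior), the chain still targets the correct distribution; a better low-fidelity-informed proposal only improves the acceptance rate.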
[2] 
Peherstorfer, B. Multifidelity Monte Carlo estimation with adaptive low-fidelity models. SIAM/ASA Journal on Uncertainty Quantification, 7:579–603, 2019.
Abstract: Multifidelity Monte Carlo (MFMC) estimation combines low- and high-fidelity models to speed up the estimation of statistics of the high-fidelity model outputs. MFMC optimally samples the low- and high-fidelity models such that the MFMC estimator has minimal mean-squared error for a given computational budget. In the setup of MFMC, the low-fidelity models are static, i.e., they are given and fixed and cannot be changed and adapted. We introduce the adaptive MFMC (AMFMC) method that splits the computational budget between adapting the low-fidelity models to improve their approximation quality and sampling the low- and high-fidelity models to reduce the mean-squared error of the estimator. Our AMFMC approach derives the quasi-optimal balance between adaptation and sampling in the sense that our approach minimizes an upper bound of the mean-squared error, instead of the error directly. We show that the quasi-optimal number of adaptations of the low-fidelity models is bounded even in the limit case that an infinite budget is available. This shows that adapting low-fidelity models in MFMC beyond a certain approximation accuracy is unnecessary and can even be wasteful. Our AMFMC approach trades off adaptation and sampling and so avoids over-adaptation of the low-fidelity models. Besides the costs of adapting low-fidelity models, our AMFMC approach can also take into account the costs of the initial construction of the low-fidelity models ("offline costs"), which is critical if low-fidelity models are computationally expensive to build, such as reduced models and data-fit surrogate models. Numerical results demonstrate that our adaptive approach can achieve orders of magnitude speedups compared to MFMC estimators with static low-fidelity models and compared to Monte Carlo estimators that use the high-fidelity model alone.
BibTeX:
@article{P19AMFMC,
  title = {Multifidelity Monte Carlo estimation with adaptive low-fidelity models},
  author = {Peherstorfer, B.},
  journal = {SIAM/ASA Journal on Uncertainty Quantification},
  volume = {7},
  pages = {579--603},
  year = {2019},
}
[3] 
Kramer, B., Marques, A., Peherstorfer, B., Villa, U. & Willcox, K. Multifidelity probability estimation via fusion of estimators. Journal of Computational Physics, 392:385–402, 2019.
Abstract: This paper develops a multifidelity method that enables estimation of failure probabilities for expensive-to-evaluate models via information fusion and importance sampling. The presented general fusion method combines multiple probability estimators with the goal of variance reduction. We use low-fidelity models to derive biasing densities for importance sampling and then fuse the importance sampling estimators such that the fused multifidelity estimator is unbiased and has mean-squared error lower than or equal to that of any of the importance sampling estimators alone. By fusing all available estimators, the method circumvents the challenging problem of selecting the best biasing density and using only that density for sampling. A rigorous analysis shows that the fused estimator is optimal in the sense that it has minimal variance amongst all possible combinations of the estimators. The asymptotic behavior of the proposed method is demonstrated on a convection-diffusion-reaction partial differential equation model for which 1e+5 samples can be afforded. To illustrate the proposed method at scale, we consider a model of a free plane jet and quantify how uncertainties at the flow inlet propagate to a quantity of interest related to turbulent mixing. Compared to an importance sampling estimator that uses the high-fidelity model alone, our multifidelity estimator reduces the required CPU time by 65% while achieving a similar coefficient of variation.
BibTeX:
@article{KMPVW17Fusion,
  title = {Multifidelity probability estimation via fusion of estimators},
  author = {Kramer, B. and Marques, A. and Peherstorfer, B. and Villa, U. and Willcox, K.},
  journal = {Journal of Computational Physics},
  volume = {392},
  pages = {385--402},
  year = {2019},
}
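For uncorrelated or correlated unbiased estimators with known (or estimated) covariance C, the minimal-variance unbiased combination uses weights proportional to C⁻¹1, which sum to one. This is a generic sketch of that fusion step (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def fuse_estimators(estimates, cov):
    """Minimal-variance unbiased linear combination of unbiased estimators.
    Weights alpha = C^{-1} 1 / (1^T C^{-1} 1) sum to one, so the fused
    estimator stays unbiased; its variance is 1 / (1^T C^{-1} 1), which is
    never larger than the variance of any single estimator."""
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)       # C^{-1} 1
    alpha = w / (ones @ w)               # normalize so weights sum to 1
    fused = alpha @ np.asarray(estimates)
    fused_var = 1.0 / (ones @ w)
    return fused, alpha, fused_var
```

For two independent estimators with variances 1 and 4, the weights come out to 0.8 and 0.2 and the fused variance is 0.8, below the better individual variance of 1.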
[4] 
Swischuk, R., Mainini, L., Peherstorfer, B. & Willcox, K. Projection-based model reduction: Formulations for physics-based machine learning. Computers & Fluids, 179:704–717, 2019.
Abstract: This paper considers the creation of parametric surrogate models for applications in science and engineering where the goal is to predict high-dimensional output quantities of interest, such as pressure, temperature and strain fields. The proposed methodology develops a low-dimensional parametrization of these quantities of interest using the proper orthogonal decomposition (POD), and combines this parametrization with machine learning methods to learn the map between the input parameters and the POD expansion coefficients. The use of particular solutions in the POD expansion provides a way to embed physical constraints, such as boundary conditions and other features of the solution that must be preserved. The relative costs and effectiveness of four different machine learning techniques (neural networks, multivariate polynomial regression, k-nearest-neighbors, and decision trees) are explored through two engineering examples. The first example considers prediction of the pressure field around an airfoil, while the second considers prediction of the strain field over a damaged composite panel. The case studies demonstrate the importance of embedding physical constraints within learned models, and also highlight the important point that the amount of model training data available in an engineering setting is often much less than it is in other machine learning applications, making it essential to incorporate knowledge from physical models. [BibTeX]
BibTeX:
@article{SMPK18PhysicsLearning,
  title = {Projection-based model reduction: Formulations for physics-based machine learning},
  author = {Swischuk, R. and Mainini, L. and Peherstorfer, B. and Willcox, K.},
  journal = {Computers & Fluids},
  volume = {179},
  pages = {704--717},
  year = {2019},
}
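The POD-plus-regression pipeline described above can be sketched in a few lines: compress snapshots with an SVD, then regress from parameters to POD coefficients. Here plain linear least squares stands in for the paper's four learning methods, and `pod_regression_surrogate` is a hypothetical name:

```python
import numpy as np

def pod_regression_surrogate(params, snapshots, r):
    """Sketch of physics-based machine learning via POD: a rank-r POD basis
    compresses high-dimensional output fields (columns of `snapshots`), and a
    regression model maps input parameters to POD coefficients. Linear least
    squares is used here in place of neural nets / polynomials / k-NN / trees."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    Ur = U[:, :r]                                  # rank-r POD basis
    coeffs = Ur.T @ snapshots                      # r x m training coefficients
    A = np.column_stack([params, np.ones(len(params))])  # affine features
    W, *_ = np.linalg.lstsq(A, coeffs.T, rcond=None)     # params -> coefficients
    def predict(p):
        a = np.append(np.atleast_1d(p), 1.0)
        return Ur @ (a @ W)                        # lift coefficients to the field
    return predict
```

For a field that depends linearly on a scalar parameter, a rank-1 basis and the affine regression recover the field exactly; in practice r and the regressor are chosen by cross-validation.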
[5] 
Peherstorfer, B., Kramer, B. & Willcox, K. Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA Journal on Uncertainty Quantification, 6(2):737–761, 2018.
Abstract: Accurately estimating rare event probabilities with Monte Carlo can become costly if for each sample a computationally expensive high-fidelity model evaluation is necessary to approximate the system response. Variance reduction with importance sampling significantly reduces the number of required samples if a suitable biasing density is used. This work introduces a multifidelity approach that leverages a hierarchy of low-cost surrogate models to efficiently construct biasing densities for importance sampling. Our multifidelity approach is based on the cross-entropy method that derives a biasing density via an optimization problem. We approximate the solution of the optimization problem at each level of the surrogate-model hierarchy, reusing the densities found on the previous levels to precondition the optimization problem on the subsequent levels. With the preconditioning, an accurate approximation of the solution of the optimization problem at each level can be obtained from a few model evaluations only. In particular, at the highest level, only a few evaluations of the computationally expensive high-fidelity model are necessary. Our numerical results demonstrate that our multifidelity approach achieves speedups of several orders of magnitude in a thermal and a reacting-flow example compared to the single-fidelity cross-entropy method that uses a single model alone.
BibTeX:
@article{PKW17MFCE,
  title = {Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation},
  author = {Peherstorfer, B. and Kramer, B. and Willcox, K.},
  journal = {SIAM/ASA Journal on Uncertainty Quantification},
  volume = {6},
  number = {2},
  pages = {737--761},
  year = {2018},
}
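To make the cross-entropy step concrete, here is a deliberately simplified single-fidelity sketch (hypothetical code, not the paper's algorithm): the biasing Gaussian's mean is adapted toward an elite quantile of samples until the failure level is reached, then the failure probability is estimated by importance sampling. In the paper's multifidelity variant, these iterations run on cheap surrogates first and the resulting densities precondition the next level:

```python
import numpy as np

def cross_entropy_is(model, threshold, d, rng, n=2000, rho=0.1, iters=10):
    """Cross-entropy method sketch (single fidelity, mean-only adaptation):
    iteratively recenter a unit-covariance Gaussian biasing density on the
    'elite' samples with the largest model outputs, then estimate
    P(model(X) >= threshold) for X ~ N(0, I_d) by importance sampling."""
    mu = np.zeros(d)
    for _ in range(iters):
        x = mu + rng.standard_normal((n, d))
        g = model(x)
        # intermediate level: elite quantile, capped at the true threshold
        gamma = min(threshold, np.quantile(g, 1.0 - rho))
        mu = x[g >= gamma].mean(axis=0)
        if gamma >= threshold:
            break
    # importance-sampling estimate with weights p(x)/q(x), q = N(mu, I)
    x = mu + rng.standard_normal((n, d))
    g = model(x)
    log_w = -0.5 * (x**2).sum(axis=1) + 0.5 * ((x - mu)**2).sum(axis=1)
    return np.mean((g >= threshold) * np.exp(log_w))
```

Adapting only the mean avoids the variance collapse a naive full-covariance refit can suffer; practical cross-entropy implementations use smoothing instead.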
[6] 
Peherstorfer, B., Gunzburger, M. & Willcox, K. Convergence analysis of multifidelity Monte Carlo estimation. Numerische Mathematik, 139(3):683–707, 2018.
Abstract: The multifidelity Monte Carlo method provides a general framework for combining cheap low-fidelity approximations of an expensive high-fidelity model to accelerate the Monte Carlo estimation of statistics of the high-fidelity model output. In this work, we investigate the properties of multifidelity Monte Carlo estimation in the setting where a hierarchy of approximations can be constructed with known error and cost bounds. Our main result is a convergence analysis of multifidelity Monte Carlo estimation, for which we prove a bound on the costs of the multifidelity Monte Carlo estimator under assumptions on the error and cost bounds of the low-fidelity approximations. The assumptions that we make are typical in the setting of similar Monte Carlo techniques. Numerical experiments illustrate the derived bounds.
BibTeX:
@article{PWK16MFMCAsymptotics,
  title = {Convergence analysis of multifidelity Monte Carlo estimation},
  author = {Peherstorfer, B. and Gunzburger, M. and Willcox, K.},
  journal = {Numerische Mathematik},
  volume = {139},
  number = {3},
  pages = {683--707},
  year = {2018},
}
[7] 
Qian, E., Peherstorfer, B., O'Malley, D., Vesselinov, V.V. & Willcox, K. Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices. SIAM/ASA Journal on Uncertainty Quantification, 6(2):683–706, 2018.
Abstract: Variance-based sensitivity analysis provides a quantitative measure of how uncertainty in a model input contributes to uncertainty in the model output. Such sensitivity analyses arise in a wide variety of applications and are typically computed using Monte Carlo estimation, but the many samples required for Monte Carlo to be sufficiently accurate can make these analyses intractable when the model is expensive. This work presents a multifidelity approach for estimating sensitivity indices that leverages cheaper low-fidelity models to reduce the cost of sensitivity analysis while retaining accuracy guarantees via recourse to the original, expensive model. This paper develops new multifidelity estimators for variance and for the Sobol' main and total effect sensitivity indices. We discuss strategies for dividing limited computational resources among models and specify a recommended strategy. Results are presented for the Ishigami function and a convection-diffusion-reaction model that demonstrate up to 10x speedups for fixed convergence levels. For the problems tested, the multifidelity approach allows inputs to be definitively ranked in importance when Monte Carlo alone fails to do so.
BibTeX:
@article{QPOVW17MFGSA,
  title = {Multifidelity Monte Carlo Estimation of Variance and Sensitivity Indices},
  author = {Qian, E. and Peherstorfer, B. and O'Malley, D. and Vesselinov, V.V. and Willcox, K.},
  journal = {SIAM/ASA Journal on Uncertainty Quantification},
  volume = {6},
  number = {2},
  pages = {683--706},
  year = {2018},
}
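For readers unfamiliar with the quantities being accelerated here, this is a single-fidelity Monte Carlo sketch of a Sobol' main-effect index using the standard Saltelli-style pick-freeze estimator (illustrative code; the paper's contribution is replacing these sample averages with multifidelity estimators):

```python
import numpy as np

def sobol_main_effect(f, d, i, n, rng):
    """Monte Carlo estimate of the Sobol' main-effect index S_i for f with
    independent U(0,1) inputs, via the pick-freeze identity
    V_i = E[f(B) (f(A_B^i) - f(A))], where A_B^i is A with column i from B."""
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    ABi = A.copy()
    ABi[:, i] = B[:, i]                 # "freeze" input i from the B sample
    fA, fB, fABi = f(A), f(B), f(ABi)
    var = np.var(np.concatenate([fA, fB]))
    return np.mean(fB * (fABi - fA)) / var
```

For an additive test function f(x, y) = x + 2y with uniform inputs, the exact first-input index is Var(x)/(Var(x) + 4 Var(y)) = 0.2.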
[8] 
Baptista, R., Marzouk, Y., Willcox, K. & Peherstorfer, B. Optimal Approximations of Coupling in Multidisciplinary Models. AIAA Journal, 56:2412–2428, 2018.
Abstract: This paper presents a methodology for identifying important discipline couplings in multicomponent engineering systems. Coupling among disciplines contributes significantly to the computational cost of analyzing a system, and can become particularly burdensome when coupled analyses are embedded within a design or optimization loop. In many cases, disciplines may be weakly coupled, so that some of the coupling or interaction terms can be neglected without significantly impacting the accuracy of the system output. Typical practice derives such approximations in an ad hoc manner using expert opinion and domain experience. This work proposes a new approach that formulates an optimization problem to find a model that optimally balances accuracy of the model outputs with the sparsity of the discipline couplings. An adaptive sequential Monte Carlo sampling-based technique is used to efficiently search the combinatorial model space of different discipline couplings. An algorithm for selecting an optimal model is presented and illustrated on a fire detection satellite model and a turbine engine cycle analysis model.
BibTeX:
@article{AIAADecouple18Baptista,
  title = {Optimal Approximations of Coupling in Multidisciplinary Models},
  author = {Baptista, R. and Marzouk, Y. and Willcox, K. and Peherstorfer, B.},
  journal = {AIAA Journal},
  volume = {56},
  pages = {2412--2428},
  year = {2018},
}
[9] 
Zimmermann, R., Peherstorfer, B. & Willcox, K. Geometric subspace updates with applications to online adaptive nonlinear model reduction. SIAM Journal on Matrix Analysis and Applications, 39(1):234–261, 2018.
Abstract: In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from the least-squares problem that underlies the approximation problem of interest such that the associated least-squares residual vanishes exactly. The method builds on a Riemannian optimization procedure on the Grassmann manifold of low-dimensional subspaces, namely the Grassmannian Rank-One Update Subspace Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems.
BibTeX:
@article{ZPW17SIMAXManifold,
  title = {Geometric subspace updates with applications to online adaptive nonlinear model reduction},
  author = {Zimmermann, R. and Peherstorfer, B. and Willcox, K.},
  journal = {SIAM Journal on Matrix Analysis and Applications},
  volume = {39},
  number = {1},
  pages = {234--261},
  year = {2018},
}
[10] 
Peherstorfer, B., Willcox, K. & Gunzburger, M. Survey of multifidelity methods in uncertainty propagation, inference, and optimization. SIAM Review, 60(3):550–591, 2018.
Abstract: In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate, etc.) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
BibTeX:
@article{PWG17MultiSurvey,
  title = {Survey of multifidelity methods in uncertainty propagation, inference, and optimization},
  author = {Peherstorfer, B. and Willcox, K. and Gunzburger, M.},
  journal = {SIAM Review},
  volume = {60},
  number = {3},
  pages = {550--591},
  year = {2018},
}
[11] 
Peherstorfer, B., Gugercin, S. & Willcox, K. Data-driven reduced model construction with time-domain Loewner models. SIAM Journal on Scientific Computing, 39(5):A2152–A2178, 2017.
Abstract: This work presents a data-driven nonintrusive model reduction approach for large-scale time-dependent systems with linear state dependence. Traditionally, model reduction is performed in an intrusive projection-based framework, where the operators of the full model are required either explicitly in an assembled form or implicitly through a routine that returns the action of the operators on a vector. Our nonintrusive approach constructs reduced models directly from trajectories of the inputs and outputs of the full model, without requiring the full-model operators. These trajectories are generated by running a simulation of the full model; our method then infers frequency-response data from these simulated time-domain trajectories and uses the data-driven Loewner framework to derive a reduced model. Only a single time-domain simulation is required to derive a reduced model with the new data-driven nonintrusive approach. We demonstrate our model reduction method on several benchmark examples and a finite element model of a cantilever beam; our approach recovers the classical Loewner reduced models and, for these problems, yields high-quality reduced models despite treating the full model as a black box.
BibTeX:
@article{PSW16TLoewner,
  title = {Data-driven reduced model construction with time-domain Loewner models},
  author = {Peherstorfer, B. and Gugercin, S. and Willcox, K.},
  journal = {SIAM Journal on Scientific Computing},
  volume = {39},
  number = {5},
  pages = {A2152--A2178},
  year = {2017},
}
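The Loewner framework the paper builds on starts from transfer-function samples H(s) at interpolation points. As a minimal sketch (assuming the frequency-response data is already available, whereas the paper infers it from time-domain trajectories), the Loewner and shifted Loewner matrices are built entrywise as divided differences; an SVD of the pencil then yields the reduced model (not shown):

```python
import numpy as np

def loewner_pencil(mu, H_mu, lam, H_lam):
    """Build the Loewner matrix L_ij = (H(mu_i) - H(lam_j)) / (mu_i - lam_j)
    and the shifted Loewner matrix
    Ls_ij = (mu_i H(mu_i) - lam_j H(lam_j)) / (mu_i - lam_j)
    from transfer-function samples at left points mu and right points lam.
    The numerical rank of L reveals the order of the underlying system."""
    dm = mu[:, None] - lam[None, :]
    L = (H_mu[:, None] - H_lam[None, :]) / dm
    Ls = (mu[:, None] * H_mu[:, None] - lam[None, :] * H_lam[None, :]) / dm
    return L, Ls
```

For the first-order transfer function H(s) = 1/(s + 1), one can check by hand that L_ij = -1/((mu_i + 1)(lam_j + 1)), a rank-one matrix, consistent with the system having order one.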
[12] 
Peherstorfer, B., Kramer, B. & Willcox, K. Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models. Journal of Computational Physics, 341:61–75, 2017.
Abstract: In failure probability estimation, importance sampling constructs a biasing distribution that targets the failure event such that a small number of model evaluations is sufficient to achieve a Monte Carlo estimate of the failure probability with an acceptable accuracy; however, the construction of the biasing distribution often requires a large number of model evaluations, which can become computationally expensive. We present a mixed multifidelity importance sampling (MMFIS) approach that leverages computationally cheap but erroneous surrogate models for the construction of the biasing distribution and that uses the original high-fidelity model to guarantee unbiased estimates of the failure probability. The key property of our MMFIS estimator is that it can leverage multiple surrogate models for the construction of the biasing distribution, instead of a single surrogate model alone. We show that our MMFIS estimator has a mean-squared error that is up to a constant lower than the mean-squared errors of the corresponding estimators that use any of the given surrogate models alone, even in settings where no information about the approximation qualities of the surrogate models is available. In particular, our MMFIS approach avoids the problem of selecting the surrogate model that leads to the estimator with the lowest mean-squared error, which is challenging if the approximation quality of the surrogate models is unknown. We demonstrate our MMFIS approach on numerical examples, where we achieve orders of magnitude speedups compared to using the high-fidelity model only.
BibTeX:
@article{PKW16MixedMFIS,
  title = {Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models},
  author = {Peherstorfer, B. and Kramer, B. and Willcox, K.},
  journal = {Journal of Computational Physics},
  volume = {341},
  pages = {61--75},
  year = {2017},
}
[13] 
Kramer, B., Peherstorfer, B. & Willcox, K. Feedback Control for Systems with Uncertain Parameters Using Online-Adaptive Reduced Models. SIAM Journal on Applied Dynamical Systems, 16(3):1563–1586, 2017.
Abstract: We consider control and stabilization for large-scale dynamical systems with uncertain, time-varying parameters. The time-critical task of controlling a dynamical system poses major challenges: Using large-scale models is prohibitive, and accurately inferring parameters can be expensive, too. We address both problems by proposing an offline-online strategy for controlling systems with time-varying parameters. During the offline phase, we use a high-fidelity model to compute a library of optimal feedback controller gains over a sampled set of parameter values. Then, during the online phase, in which the uncertain parameter changes over time, we learn a reduced-order model from system data. The learned reduced-order model is employed within an optimization routine to update the feedback control throughout the online phase. Since the system data naturally reflects the uncertain parameter, the data-driven updating of the controller gains is achieved without an explicit parameter estimation step. We consider two numerical test problems in the form of partial differential equations: a convection-diffusion system, and a model for flow through a porous medium. We demonstrate on those models that the proposed method successfully stabilizes the system model in the presence of process noise.
BibTeX:
@article{KPW16ControlAdaptROM,
  title = {Feedback Control for Systems with Uncertain Parameters Using Online-Adaptive Reduced Models},
  author = {Kramer, B. and Peherstorfer, B. and Willcox, K.},
  journal = {SIAM Journal on Applied Dynamical Systems},
  volume = {16},
  number = {3},
  pages = {1563--1586},
  year = {2017},
}
[14] 
Peherstorfer, B., Willcox, K. & Gunzburger, M. Optimal model management for multifidelity Monte Carlo estimation. SIAM Journal on Scientific Computing, 38(5):A3163–A3194, 2016.
Abstract: This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models that follow a known rate of error decay and computational costs; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and costs. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show that speedups by orders of magnitude are obtained compared to Monte Carlo estimation that invokes a single model only.
BibTeX:
@article{Peherstorfer15Multi,
  title = {Optimal model management for multifidelity Monte Carlo estimation},
  author = {Peherstorfer, B. and Willcox, K. and Gunzburger, M.},
  journal = {SIAM Journal on Scientific Computing},
  volume = {38},
  number = {5},
  pages = {A3163--A3194},
  year = {2016},
}
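The core multifidelity Monte Carlo (MFMC) idea can be illustrated with two models: the low-fidelity model acts as a control variate evaluated on many samples, the high-fidelity model on few, and the estimator stays unbiased for the high-fidelity statistic for any choice of coefficient. This two-model sketch uses hypothetical names and omits the paper's optimal sample-allocation step:

```python
import numpy as np

def mfmc_two_model(f_hi, f_lo, samples, m_hi, alpha):
    """Two-model MFMC sketch: evaluate the high-fidelity model on the first
    m_hi inputs and the low-fidelity model on all inputs. The low-fidelity
    difference term is a control variate with coefficient alpha (optimally
    alpha = rho * sigma_hi / sigma_lo). Because both low-fidelity sample means
    estimate the same quantity, the estimator is unbiased for E[f_hi]."""
    y_hi = f_hi(samples[:m_hi])          # few expensive evaluations
    y_lo_all = f_lo(samples)             # many cheap evaluations
    return y_hi.mean() + alpha * (y_lo_all.mean() - y_lo_all[:m_hi].mean())
```

With f_lo = 0.9 f_hi (perfect correlation, scaled), the optimal coefficient 1/0.9 makes the high-fidelity variance cancel exactly, so the estimator inherits the accuracy of the large low-fidelity sample.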
[15] 
Peherstorfer, B. & Willcox, K. Data-driven operator inference for nonintrusive projection-based model reduction. Computer Methods in Applied Mechanics and Engineering, 306:196–215, 2016.
Abstract: This work presents a nonintrusive projection-based model reduction approach for full models based on time-dependent partial differential equations. Projection-based model reduction constructs the operators of a reduced model by projecting the equations of the full model onto a reduced space. Traditionally, this projection is intrusive, which means that the full-model operators are required either explicitly in an assembled form or implicitly through a routine that returns the action of the operators on a given vector; however, in many situations the full model is given as a black box that computes trajectories of the full-model states and outputs for given initial conditions and inputs, but does not provide the full-model operators. Our nonintrusive operator inference approach infers approximations of the reduced operators from the initial conditions, inputs, trajectories of the states, and outputs of the full model, without requiring the full-model operators. Our operator inference is applicable to full models that are linear in the state or have a low-order polynomial nonlinear term. The inferred operators are the solution of a least-squares problem and converge, with sufficient state trajectory data, in the Frobenius norm to the reduced operators that would be obtained via an intrusive projection of the full-model operators. Our numerical results demonstrate operator inference on a linear climate model and on a tubular reactor model with a polynomial nonlinear term of third order.
BibTeX:
@article{Peherstorfer16DataDriven,
  title = {Data-driven operator inference for nonintrusive projection-based model reduction},
  author = {Peherstorfer, B. and Willcox, K.},
  journal = {Computer Methods in Applied Mechanics and Engineering},
  volume = {306},
  pages = {196--215},
  year = {2016},
}
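For the linear case dx/dt = Ax + Bu, the least-squares problem described above can be sketched directly: project state trajectories onto a POD basis and regress the projected time derivatives on the projected states and inputs. The function name is hypothetical and, for clarity, exact time derivatives are assumed to be available (in practice they are approximated from snapshots):

```python
import numpy as np

def infer_operators(states, dstates, inputs, V):
    """Operator inference sketch for dx/dt = A x + B u: project trajectories
    onto the POD basis V, then solve a least-squares problem for reduced
    operators Ahat, Bhat that best explain the projected time derivatives.
    No access to the full-model operators A, B is needed."""
    Xh = V.T @ states                      # r x k reduced states
    dXh = V.T @ dstates                    # r x k reduced time derivatives
    D = np.vstack([Xh, inputs])            # data matrix of the regression
    O, *_ = np.linalg.lstsq(D.T, dXh.T, rcond=None)
    r = V.shape[1]
    Ahat, Bhat = O[:r].T, O[r:].T          # split solution into operators
    return Ahat, Bhat
```

When the trajectory data lies exactly in the span of V and is sufficiently rich, the inferred operators coincide with the intrusive projections V^T A V and V^T B, matching the paper's convergence statement.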
[16] 
Peherstorfer, B. & Willcox, K. Dynamic data-driven model reduction: Adapting reduced models from incomplete data. Advanced Modeling and Simulation in Engineering Sciences, 3(11), 2016.
Abstract: This work presents a data-driven online adaptive model reduction approach for systems that undergo dynamic changes. Classical model reduction constructs a reduced model of a large-scale system in an offline phase and then keeps the reduced model unchanged during the evaluations in an online phase; however, if the system changes online, the reduced model may fail to predict the behavior of the changed system. Rebuilding the reduced model from scratch is often too expensive in time-critical and real-time environments. We introduce a dynamic data-driven adaptation approach that adapts the reduced model from incomplete sensor data obtained from the system during the online computations. The updates to the reduced models are derived directly from the incomplete data, without recourse to the full model. Our adaptivity approach approximates the missing values in the incomplete sensor data with gappy proper orthogonal decomposition. These approximate data are then used to derive low-rank updates to the reduced basis and the reduced operators. In our numerical examples, incomplete data with 30–40 percent known values are sufficient to recover the reduced model that would be obtained via rebuilding from scratch.
BibTeX:
@article{Peherstorfer16AdaptROM,
  title = {Dynamic data-driven model reduction: Adapting reduced models from incomplete data},
  author = {Peherstorfer, B. and Willcox, K.},
  journal = {Advanced Modeling and Simulation in Engineering Sciences},
  volume = {3},
  number = {11},
  year = {2016},
}
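The gappy proper orthogonal decomposition step used here to complete incomplete sensor data is a small least-squares problem: fit basis coefficients to the known entries, then reconstruct the missing ones. A minimal sketch with a hypothetical function name:

```python
import numpy as np

def gappy_pod_fill(U, known_idx, known_vals):
    """Gappy POD sketch: find the basis coefficients c that match the known
    entries of a vector in a least-squares sense, then fill in the missing
    entries from the reconstruction U @ c."""
    c, *_ = np.linalg.lstsq(U[known_idx], known_vals, rcond=None)
    return U @ c
```

If the true vector lies in the span of U and the observed rows of U have full column rank, the reconstruction is exact; with noisy sensor data it is the least-squares best fit within the basis.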
[17] 
Peherstorfer, B., Cui, T., Marzouk, Y. & Willcox, K. Multifidelity Importance Sampling. Computer Methods in Applied Mechanics and Engineering, 300:490–509, 2016.
Abstract: Estimating statistics of model outputs with the Monte Carlo method often requires a large number of model evaluations. This leads to long runtimes if the model is expensive to evaluate. Importance sampling is one approach that can lead to a reduction in the number of model evaluations. Importance sampling uses a biasing distribution to sample the model more efficiently, but generating such a biasing distribution can be difficult and usually also requires model evaluations. A different strategy to speed up Monte Carlo sampling is to replace the computationally expensive high-fidelity model with a computationally cheap surrogate model; however, because the surrogate model outputs are only approximations of the high-fidelity model outputs, the estimate obtained using a surrogate model is in general biased with respect to the estimate obtained using the high-fidelity model. We introduce a multifidelity importance sampling (MFIS) method, which combines evaluations of both the high-fidelity and a surrogate model. It uses a surrogate model to facilitate the construction of the biasing distribution, but relies on a small number of evaluations of the high-fidelity model to derive an unbiased estimate of the statistics of interest. We prove that the MFIS estimate is unbiased even in the absence of accuracy guarantees on the surrogate model itself. The MFIS method can be used with any type of surrogate model, such as projection-based reduced-order models and data-fit models. Furthermore, the MFIS method is applicable to black-box models, i.e., where only inputs and the corresponding outputs of the high-fidelity and the surrogate model are available but not the details of the models themselves. We demonstrate on nonlinear and time-dependent problems that our MFIS method achieves speedups of up to several orders of magnitude compared to Monte Carlo with importance sampling that uses the high-fidelity model only.
BibTeX:
@article{Peherstorfer16MFIS,
  title = {Multifidelity Importance Sampling},
  author = {Peherstorfer, B. and Cui, T. and Marzouk, Y. and Willcox, K.},
  journal = {Computer Methods in Applied Mechanics and Engineering},
  volume = {300},
  pages = {490--509},
  year = {2016},
}
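The MFIS split (surrogate builds the biasing density, high-fidelity model provides the unbiased estimate) can be sketched for a failure-probability setting. This is illustrative code under simplifying assumptions stated in the docstring (Gaussian input, mean-shift-only biasing density), not the paper's construction, which works with general surrogates and mixture densities:

```python
import numpy as np

def mfis_estimate(f_hi, f_lo, threshold, d, rng, n_build=5000, n_est=2000):
    """MFIS sketch for P(f_hi(X) >= threshold), X ~ N(0, I_d): the cheap
    surrogate f_lo selects failure samples to center a unit-covariance
    Gaussian biasing density q; the estimate then uses the high-fidelity
    model with importance weights p/q, so it is unbiased with respect to
    f_hi even if the surrogate is inaccurate (as long as q covers the
    failure region)."""
    # step 1: surrogate-driven construction of the biasing density
    x = rng.standard_normal((n_build, d))
    mu = x[f_lo(x) >= threshold].mean(axis=0)
    # step 2: unbiased high-fidelity estimate with weights p(z)/q(z)
    z = mu + rng.standard_normal((n_est, d))
    log_w = -0.5 * (z**2).sum(axis=1) + 0.5 * ((z - mu)**2).sum(axis=1)
    return np.mean((f_hi(z) >= threshold) * np.exp(log_w))
```

In the test, the surrogate is deliberately biased (it overpredicts the output by 0.1), yet the estimate remains close to the true high-fidelity failure probability, which is the point of keeping the high-fidelity model in the estimator.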
[18] 
Peherstorfer, B. & Willcox, K. Online Adaptive Model Reduction for Nonlinear Systems via Low-Rank Updates. SIAM Journal on Scientific Computing, 37(4):A2123–A2150, 2015.
Abstract: This work presents a nonlinear model reduction approach for systems of equations stemming from the discretization of partial differential equations with nonlinear terms. Our approach constructs a reduced system with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM); however, whereas classical DEIM derives a linear approximation of the nonlinear terms in a static DEIM space generated in an offline phase, our method adapts the DEIM space as the online calculation proceeds and thus provides a nonlinear approximation. The online adaptation uses new data to produce a reduced system that accurately approximates behavior not anticipated in the offline phase. These online data are obtained by querying the full-order system during the online phase, but only at a few selected components to guarantee a computationally efficient adaptation. Compared to the classical static approach, our online adaptive and nonlinear model reduction approach achieves accuracy improvements of up to three orders of magnitude in our numerical experiments with time-dependent and steady-state nonlinear problems. The examples also demonstrate that through adaptivity, our reduced systems provide valid approximations of the full-order systems outside of the parameter domains for which they were initially built in the offline phase.
@article{Peherstorfer15aDEIM,
title = {Online Adaptive Model Reduction for Nonlinear Systems via Low-Rank Updates},
author = {Peherstorfer, B. and Willcox, K.},
journal = {SIAM Journal on Scientific Computing},
volume = {37},
number = {4},
pages = {A2123--A2150},
year = {2015},
} 
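The key cost argument behind low-rank online updates can be illustrated with elementary linear algebra: if the full operator changes by a rank-one term, the reduced operator can be updated by projecting only that term, instead of reprojecting the full operator. This is a minimal generic sketch of that idea, not the paper's adaptation scheme; all matrices here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 5

# Hypothetical full operator A and reduced basis V with orthonormal columns.
A = rng.standard_normal((n, n))
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
Ar = V.T @ A @ V                          # offline reduced operator

# Online: the full system changes by a rank-one term u v^T.
u = rng.standard_normal(n)
v = rng.standard_normal(n)

# Low-rank update of the reduced operator in O(n r) operations:
# V^T (A + u v^T) V = Ar + (V^T u)(V^T v)^T.
Ar_updated = Ar + np.outer(V.T @ u, V.T @ v)

# Sanity check against rebuilding from scratch (O(n^2 r)).
Ar_rebuilt = V.T @ (A + np.outer(u, v)) @ V
assert np.allclose(Ar_updated, Ar_rebuilt)
```

The update touches only matrix-vector products with the basis, which is what makes repeated online adaptation cheap relative to rebuilding the projection.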
[19] 
Peherstorfer, B., Gómez, P. & Bungartz, H.-J. Reduced Models for Sparse Grid Discretizations of the Multi-Asset Black-Scholes Equation. Advances in Computational Mathematics, 41(5):1365–1389, 2015.
Abstract: This work presents reduced models for pricing basket options with the Black-Scholes and the Heston model. Basket options lead to multidimensional partial differential equations (PDEs) that quickly become computationally infeasible to discretize on full tensor grids. We therefore rely on sparse grid discretizations of the PDEs, which allow us to cope with the curse of dimensionality to some extent. We then derive reduced models with proper orthogonal decomposition. Our numerical results with the Black-Scholes model show that sufficiently accurate results are achieved while gaining speedups between 80 and 160 compared to the high-fidelity sparse grid model for 2-, 3-, and 4-asset options. For the Heston model, results are presented for a single-asset option that leads to a two-dimensional pricing problem, where we achieve significant speedups with our model reduction approach based on high-fidelity sparse grid models.
@article{pehersto15BlackScholes,
title = {Reduced Models for Sparse Grid Discretizations of the Multi-Asset Black-Scholes Equation},
author = {Peherstorfer, B. and Gómez, P. and Bungartz, H.-J.},
journal = {Advances in Computational Mathematics},
volume = {41},
number = {5},
pages = {1365--1389},
year = {2015},
} 
[20] 
Peherstorfer, B. & Willcox, K. Dynamic Data-Driven Reduced-Order Models. Computer Methods in Applied Mechanics and Engineering, 291:21–41, 2015.
Abstract: Data-driven model reduction constructs reduced-order models of large-scale systems by learning the system response characteristics from data. Existing methods build the reduced-order models in a computationally expensive offline phase and then use them in an online phase to provide fast predictions of the system. In cases where the underlying system properties are not static but undergo dynamic changes, repeating the offline phase after each system change to rebuild the reduced-order model from scratch forfeits the savings gained in the online phase. This paper proposes dynamic reduced-order models that break with this classical but rigid approach. Dynamic reduced-order models exploit the opportunity presented by dynamic sensor data and adaptively incorporate sensor data during the online phase. This permits online adaptation to system changes while circumventing the expensive rebuilding of the model. A computationally cheap adaptation is achieved by constructing low-rank updates to the reduced operators. With these updates and with sufficient and accurate data, our approach recovers the same model that would be obtained by rebuilding from scratch. We demonstrate dynamic reduced-order models on a structural assessment example in the context of real-time decision making. We consider a plate in bending where the dynamic reduced-order model quickly adapts to changes in structural properties and achieves speedups of four orders of magnitude compared to rebuilding a model from scratch.
@article{pehersto15dynamic,
title = {Dynamic Data-Driven Reduced-Order Models},
author = {Peherstorfer, B. and Willcox, K.},
journal = {Computer Methods in Applied Mechanics and Engineering},
volume = {291},
pages = {21--41},
year = {2015},
} 
[21] 
Peherstorfer, B., Zimmer, S., Zenger, C. & Bungartz, H.-J. A Multigrid Method for Adaptive Sparse Grids. SIAM Journal on Scientific Computing, 37(5):S51–S70, 2015.
Abstract: Sparse grids have become an important tool to reduce the number of degrees of freedom of discretizations of moderately high-dimensional partial differential equations; however, the reduction in degrees of freedom comes at the cost of an almost dense and unconventionally structured system of linear equations. To guarantee overall efficiency of the sparse grid approach, special linear solvers are required. We present a multigrid method that exploits the sparse grid structure to achieve an optimal runtime that scales linearly with the number of sparse grid points. Our approach is based on a novel decomposition of the right-hand sides of the coarse grid equations that leads to a reformulation in so-called auxiliary coefficients. With these auxiliary coefficients, the right-hand sides can be represented in a nodal point basis on low-dimensional full grids. Our proposed multigrid method directly operates in this auxiliary coefficient representation, circumventing most of the computationally cumbersome sparse grid structure. Numerical results on nonadaptive and spatially adaptive sparse grids confirm that the runtime of our method scales linearly with the number of sparse grid points and they indicate that the obtained convergence factors are bounded independently of the mesh width.
@article{peherstorfer15htmg,
title = {A Multigrid Method for Adaptive Sparse Grids},
author = {Peherstorfer, B. and Zimmer, S. and Zenger, C. and Bungartz, H.-J.},
journal = {SIAM Journal on Scientific Computing},
volume = {37},
number = {5},
pages = {S51--S70},
year = {2015},
} 
[22] 
Peherstorfer, B., Butnaru, D., Willcox, K. & Bungartz, H.-J. Localized Discrete Empirical Interpolation Method. SIAM Journal on Scientific Computing, 36(1):A168–A192, 2014.
Abstract: This paper presents a new approach to construct more efficient reduced-order models for nonlinear partial differential equations with proper orthogonal decomposition and the discrete empirical interpolation method (DEIM). Whereas DEIM projects the nonlinear term onto one global subspace, our localized discrete empirical interpolation method (LDEIM) computes several local subspaces, each tailored to a particular region of characteristic system behavior. Then, depending on the current state of the system, LDEIM selects an appropriate local subspace for the approximation of the nonlinear term. In this way, the dimensions of the local DEIM subspaces, and thus the computational costs, remain low even though the system might exhibit a wide range of behaviors as it passes through different regimes. LDEIM uses machine learning methods in the offline computational phase to discover these regions via clustering. Local DEIM approximations are then computed for each cluster. In the online computational phase, machine-learning-based classification procedures select one of these local subspaces adaptively as the computation proceeds. The classification can be achieved using either the system parameters or a low-dimensional representation of the current state of the system obtained via feature extraction. The LDEIM approach is demonstrated for a reacting flow example of an H2-Air flame. In this example, where the system state has a strong nonlinear dependence on the parameters, the LDEIM provides speedups of two orders of magnitude over standard DEIM.
@article{peherstorfer13localized,
title = {Localized Discrete Empirical Interpolation Method},
author = {Peherstorfer, B. and Butnaru, D. and Willcox, K. and Bungartz, H.-J.},
journal = {SIAM Journal on Scientific Computing},
volume = {36},
number = {1},
pages = {A168--A192},
year = {2014},
} 
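The offline/online split described in the LDEIM abstract (cluster snapshots offline, build a local basis per cluster, then classify the current state online and approximate in that local subspace) can be illustrated generically. This is a simplified nearest-centroid sketch with synthetic two-regime data, not the paper's clustering or DEIM point-selection procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical snapshots of a nonlinear term, drawn from two regimes.
snaps_a = rng.standard_normal((50, 30)) + 5.0      # regime A
snaps_b = rng.standard_normal((50, 30)) - 5.0      # regime B
snapshots = np.vstack([snaps_a, snaps_b])

# Offline: assign snapshots to cluster centroids and build one local
# basis per cluster via an SVD of that cluster's snapshots.
centroids = np.stack([snaps_a.mean(0), snaps_b.mean(0)])
labels = np.argmin(
    np.linalg.norm(snapshots[:, None, :] - centroids[None], axis=2), axis=1)
local_bases = [
    np.linalg.svd(snapshots[labels == k].T, full_matrices=False)[0][:, :5]
    for k in range(2)]

# Online: classify the current state to a cluster and approximate it
# in the corresponding low-dimensional local subspace only.
def approx(state):
    k = int(np.argmin(np.linalg.norm(centroids - state, axis=1)))
    U = local_bases[k]
    return U @ (U.T @ state), k

state = rng.standard_normal(30) + 5.0              # state near regime A
_, chosen = approx(state)
```

Keeping each local basis small is what caps the online cost: the approximation always uses a 5-dimensional subspace here, regardless of how many regimes the system visits.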
[23] 
Peherstorfer, B., Kowitz, C., Pflüger, D. & Bungartz, H.-J. Selected Recent Applications of Sparse Grids. Numerical Mathematics: Theory, Methods and Applications, 8(1):47–77, 2014.
Abstract: Sparse grids have become a versatile tool for a vast range of applications reaching from interpolation and numerical quadrature to data-driven problems and uncertainty quantification. We review four selected real-world applications of sparse grids: financial product pricing with the Black-Scholes model, interactive exploration of simulation data with sparse-grid-based surrogate models, analysis of simulation data through sparse grid data mining methods, and stability investigations of plasma turbulence simulations.
@article{Peherstorfer14SGReview,
title = {Selected Recent Applications of Sparse Grids},
author = {Peherstorfer, B. and Kowitz, C. and Pflüger, D. and Bungartz, H.-J.},
journal = {Numerical Mathematics: Theory, Methods and Applications},
volume = {8},
number = {1},
pages = {47--77},
year = {2014},
} 
[24] 
Pflüger, D., Peherstorfer, B. & Bungartz, H.-J. Spatially adaptive sparse grids for high-dimensional data-driven problems. Journal of Complexity, 26(5):508–522, 2010.
Abstract: Sparse grids allow one to employ grid-based discretization methods in data-driven problems. We present an extension of the classical sparse grid approach that allows us to tackle high-dimensional problems by spatially adaptive refinement, modified ansatz functions, and efficient regularization techniques. The competitiveness of this method is shown for typical benchmark problems with up to 166 dimensions for classification in data mining, pointing out properties of sparse grids in this context. To gain insight into the adaptive refinement and to examine the scope for further improvements, the approximation of nonsmooth indicator functions with adaptive sparse grids has been studied as a model problem. As an example for an improved adaptive grid refinement, we present results for an edge-detection strategy.
@article{pflueger10spatially,
title = {Spatially adaptive sparse grids for highdimensional datadriven problems},
author = {Pflüger, D. and Peherstorfer, B. and Bungartz, H.-J.},
journal = {Journal of Complexity},
volume = {26},
number = {5},
pages = {508--522},
year = {2010},
} 
Full list

