Computational Mathematics and Scientific Computing Seminar

Machine Learning for Differential Equation Modeling: Statistics and Computation

Time and Location:

Feb. 24, 2023 at 10AM; Warren Weaver Hall, Room 1302

Speaker:

Yiping Lu, Stanford

Abstract:

Massive data collection and computational capabilities have enabled data-driven scientific discovery and the control of engineering systems. However, several questions remain open about the fundamental limits of how much can be discovered from data and about the value of additional information. For example: 1) How can we learn a physical law or economic principle purely from data? 2) How hard is this task, both computationally and statistically? 3) How does the hardness change when we add further information (e.g., more data or model information)? In this talk, I'll address these three questions through two learning tasks. A key insight in both cases is that direct plug-in estimators can result in statistically suboptimal inference.

The first learning task I'll discuss is linear operator learning / functional data analysis, which has wide applications in causal inference, time series modeling, and conditional probability learning. We establish the first minimax lower bound for this problem. The minimax rate has a particular structure in which the more challenging parts of the input and output spaces determine the hardness of learning a linear operator. Our analysis also shows that an intuitive discretization of the infinite-dimensional operator can lead to a suboptimal statistical learning rate. I'll then discuss how, by suitably trading off bias and variance, we can construct an estimator with the optimal learning rate for a linear operator between infinite-dimensional spaces. We also illustrate how this theory can inspire a multilevel machine-learning algorithm of potential practical use.
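To make the setting concrete, here is a minimal sketch of a linear operator regression model (the notation and the least-squares form are illustrative assumptions, not necessarily the speaker's exact formulation):

\[
y_i = A x_i + \varepsilon_i, \qquad i = 1, \dots, n, \qquad A : \mathcal{H}_{\mathrm{in}} \to \mathcal{H}_{\mathrm{out}} \ \text{linear},
\]
\[
\widehat{A} \in \arg\min_{B \in \mathcal{B}} \; \frac{1}{n} \sum_{i=1}^{n} \bigl\| y_i - B x_i \bigr\|_{\mathcal{H}_{\mathrm{out}}}^{2},
\]

where the inputs \(x_i\) and outputs \(y_i\) live in infinite-dimensional (function) spaces. The abstract's point is that choosing the hypothesis class \(\mathcal{B}\) by naively discretizing \(A\) (a plug-in estimator) can be statistically suboptimal, whereas a suitable bias-variance trade-off recovers the optimal rate.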

For the second learning task, we focus on variational formulations of differential equation models, using a prototypical Poisson equation as the running example. We provide a minimax lower bound for this problem, which reveals that the variance of the direct plug-in estimator makes its sample complexity suboptimal. We also study the optimization dynamics of different variational forms. Finally, based on our theory, we explain the implicit acceleration obtained by using a Sobolev norm as the training objective.
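As a concrete illustration of the variational formulation and the plug-in estimator mentioned above (a standard sketch for the zero-Dirichlet Poisson problem \(-\Delta u^* = f\) on a domain \(\Omega\); the speaker's exact setup may differ), the solution minimizes the Ritz energy

\[
u^* = \arg\min_{u} \; \int_{\Omega} \Bigl( \tfrac{1}{2} \|\nabla u(x)\|^2 - f(x)\, u(x) \Bigr) \, dx,
\]

and the direct plug-in estimator replaces the integral with a Monte Carlo average over samples \(x_1, \dots, x_n\):

\[
\widehat{u} = \arg\min_{u \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \Bigl( \tfrac{1}{2} \|\nabla u(x_i)\|^2 - f(x_i)\, u(x_i) \Bigr).
\]

The variance of this Monte Carlo average is what, per the abstract, makes the plug-in estimator's sample complexity suboptimal.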