Computational Mathematics and Scientific Computing Seminar

Operator Learning: From Theory to Practice

Time and Location:

Feb. 2, 2024 at 10 AM; Warren Weaver Hall, Room 1302

Speaker:

Nikola Kovachki, NVIDIA and California Institute of Technology

Abstract:

We present a general framework for approximating nonlinear maps between infinite-dimensional Banach spaces from observations.

Our approach follows the "discretize last" philosophy: approximation architectures are designed directly on the function spaces of interest, without tying parameters to any finite-dimensional discretization. Such architectures exhibit an approximation error that is independent of the discretization of the training data and can make use of data sources with diverse discretizations, as is common in many engineering problems.

We review the infinite-dimensional approximation theory for such architectures, establishing the universal approximation property and the manifestation of the curse of dimensionality, which translates algebraic rates in finite dimensions into exponential rates in infinite dimensions. We discuss the efficient approximation of certain operators arising from parametric partial differential equations (PDEs) and show that efficient parametric approximation implies efficient data approximation.

We demonstrate the utility of our framework numerically on a variety of large-scale problems arising in fluid dynamics, porous media flow, weather modeling, and crystal plasticity. Our results show that data-driven methods can provide orders-of-magnitude computational speed-ups at fixed accuracy compared to classical numerical methods, and they hold immense promise for modeling complex physical phenomena across multiple scales.
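
As a loose illustration of the "discretize last" idea (a minimal sketch, not code from the talk), the Python snippet below implements a Fourier-style spectral-convolution layer whose learnable weights are attached to a fixed number of Fourier modes rather than to any particular grid, so the same parameters can be evaluated on inputs sampled at different resolutions. The class name, mode count, and test functions are illustrative assumptions.

import numpy as np

class SpectralConv1d:
    """One spectral-convolution layer: keep `n_modes` Fourier modes,
    multiply them by learned complex weights, transform back to the input grid."""

    def __init__(self, n_modes: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Parameters are indexed by Fourier mode, independent of any grid size.
        self.weights = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
        self.n_modes = n_modes

    def __call__(self, u: np.ndarray) -> np.ndarray:
        """u: function values on a uniform grid of arbitrary length n."""
        u_hat = np.fft.rfft(u)                      # grid values -> Fourier coefficients
        k = min(self.n_modes, u_hat.shape[0])       # keep only the low modes we parameterize
        out_hat = np.zeros_like(u_hat)
        out_hat[:k] = self.weights[:k] * u_hat[:k]  # pointwise multiplication in Fourier space
        return np.fft.irfft(out_hat, n=u.shape[0])  # back to the original grid

layer = SpectralConv1d(n_modes=16)
for n in (64, 256, 1024):                           # same parameters, three discretizations
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
    v = layer(u)
    print(n, v.shape)                               # output lives on the input's grid

Because the weights live in Fourier space, refining or coarsening the sampling grid changes only the transforms, not the parameter count, which is one simple way the approximation can remain independent of the training-data discretization.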