Computational Mathematics and Scientific Computing Seminar

Operator learning without the adjoint

Time and Location:

March 6, 2026 at 10 AM; Warren Weaver Hall, Room 1302

Speaker:

Diana Halikias, New York University

Abstract:

There is a mystery at the heart of operator learning: how can one recover a non-self-adjoint operator from data without probing the adjoint? Current practical approaches suggest that an operator can be accurately recovered using only data generated by its forward action, with no access to the adjoint. Naively, however, sampling the action of the adjoint seems essential. We partially explain this mystery by proving that, without querying the adjoint, one can approximate a family of non-self-adjoint, infinite-dimensional compact operators via projection onto a Fourier basis. We then apply this result to recovering Green's functions of elliptic partial differential operators and derive an adjoint-free sample complexity bound. While existing theory justifies low sample complexity in operator learning, ours is the first adjoint-free analysis that attempts to close the gap between theory and practice. Time permitting, we also explore a closely related question in randomized numerical linear algebra: when is access to both forward and transpose matrix-vector products essential? We discuss the role of transpose access in sketching algorithms for low-rank approximation, least-squares problems, and norm estimation.
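
Illustrative note (not part of the abstract): the sketch below shows, for a standard randomized low-rank approximation, where transpose (adjoint) queries typically enter. The synthetic matrix A, the sketch size k, and all variable names are assumptions chosen for illustration; the matrix simply stands in for an operator accessed through matrix-vector products.

    # Minimal NumPy sketch: forward matvecs capture the range of A,
    # but forming the factor B = Q^T A amounts to transpose (adjoint) queries.
    import numpy as np

    rng = np.random.default_rng(0)
    n, rank, k = 200, 10, 20                 # size, true rank, sketch size (illustrative)

    # Synthetic non-symmetric low-rank matrix, standing in for a non-self-adjoint operator.
    A = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, n))

    # Step 1: forward queries only -- sample the range of A.
    Omega = rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(A @ Omega)           # k forward matvecs; orthonormal basis for range(A)

    # Step 2: B = Q^T A equals (A^T Q)^T, i.e. k matvecs with A^T -- this is where
    # transpose access is used in this particular algorithm.
    B = (A.T @ Q).T
    A_approx = Q @ B

    print("relative error:", np.linalg.norm(A - A_approx) / np.linalg.norm(A))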