MaD Seminar: Learning under group equivariance: a perspective from harmonic analysis
Speaker: Theodor Misiakiewicz (Yale)
Location: 60 Fifth Avenue, Room 150
Date: Thursday, April 24, 2025
Modern machine learning heavily relies on the success of large-scale models trained via gradient-based algorithms. A major effort in recent years has been to understand the fundamental limits of these learning algorithms: What is the complexity of gradient-based training? Which distributions can these algorithms learn efficiently? Can we identify simple principles underlying feature learning?
To make progress on these questions, we focus in this talk on a key property of 'generic' gradient-based methods: their equivariance with respect to a large symmetry group G. Motivated by this observation, we develop a group-theoretic framework to analyze the complexity of G-equivariant learning algorithms when trained on a target distribution D. Focusing on statistical query (SQ) algorithms, we introduce two complexity measures, the Alignment complexity and the Leap complexity, which aim to quantify the difficulty of weak and strong learning under group equivariance. These measures depend explicitly on the interplay between the data distribution D, the group G, and the allowed statistics Q, and can be computed using tools from harmonic analysis.
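As background (a standard gloss on two terms used above, not the speaker's own definitions; the symbols A, φ, v, and τ below are ours): a learning algorithm A is G-equivariant if transforming the data distribution by any group element transforms the learned predictor in the same way,

\[
A(g \cdot D) \overset{d}{=} g \cdot A(D) \qquad \text{for all } g \in G,
\]

where equality holds in distribution over the algorithm's internal randomness. An SQ algorithm, rather than seeing samples, accesses D only through queries φ from the allowed class Q: for each query, the oracle returns some value v satisfying

\[
\bigl| \, v - \mathbb{E}_{(x,y) \sim D}\bigl[\phi(x,y)\bigr] \, \bigr| \le \tau
\]

for a tolerance parameter τ > 0.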
We show that this framework reveals a natural factorization of the symmetry group and data distribution, and suggests a sequential, adaptive learning process. Finally, using our results, we revisit recent work on learning juntas and multi-index models, and provide new insights into learning Gaussian single-index models.
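For reference (standard definitions, not taken from the abstract): a k-junta on n variables is a function that depends on at most k of its coordinates, and a multi-index model takes the form

\[
f(x) = g\bigl(\langle w_1, x \rangle, \ldots, \langle w_k, x \rangle\bigr)
\]

for unknown directions w_1, …, w_k and an unknown link function g; a single-index model is the k = 1 case, with a Gaussian single-index model taking x to be Gaussian-distributed.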
This is based on joint work with Hugo Koubbi (Yale), Nirmit Joshi (TTIC), and Nati Srebro (TTIC).