Challenges and Promises in Invariant Learning
Speaker: Yoav Wald
Location: 60 Fifth Avenue, 7th Floor
Date: Wednesday, March 8, 2023
Invariant learning is an emerging paradigm for learning models that remain stable under distribution shift or satisfy fairness conditions. The first part of the talk will cover work that studies model calibration across several datasets as a form of invariance that is useful for out-of-distribution generalization, and demonstrates the potential of invariant learning.
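For concreteness, here is one way to state calibration-as-invariance (the notation is ours; the talk may formalize it differently): a predictor f with scores in [0, 1] is simultaneously calibrated on all training environments when

% Multi-domain calibration as an invariance condition (sketch; our notation).
% Y is the binary label, f(X) the predicted score, and E the environment index.
\mathbb{E}\left[\, Y \mid f(X) = p,\; E = e \,\right] = p
\quad \text{for every environment } e \text{ and every score } p \text{ attained by } f.

A model that is calibrated only on the pooled training data can still be badly miscalibrated within individual environments; requiring the condition per environment is what makes it an invariance.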
However, multiple recent studies have empirically demonstrated that common invariant learning methods are ineffective in the over-parameterized regime, where classifiers perfectly fit (i.e., interpolate) the training data. This suggests that the phenomenon of “benign overfitting,” in which models generalize well despite interpolating, might not extend favorably to settings where robustness or fairness are desirable. Our work provides theoretical justification for these observations. We prove that, even in the simplest of settings, any interpolating learning rule will fail to satisfy invariance. On the other hand, we present a non-interpolating algorithm that provably learns an invariant model in the same setting.
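To make concrete what a common invariant learning method looks like, below is a minimal PyTorch sketch of the IRMv1 penalty (Arjovsky et al., 2019), one widely used objective of this kind; it is offered as an illustration of the method class the abstract refers to, not as the speaker's construction.

import torch
import torch.nn.functional as F

def irmv1_penalty(logits, y):
    # IRMv1 surrogate for invariance: squared gradient of the
    # per-environment risk w.r.t. a frozen scalar "classifier" w = 1.0.
    w = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * w, y)
    (grad,) = torch.autograd.grad(loss, [w], create_graph=True)
    return grad.pow(2)

def irm_objective(model, envs, lam=1e2):
    # envs: list of (x, y) batches, one per training environment.
    # Objective = mean per-environment risk + lam * mean invariance penalty.
    risks, penalties = [], []
    for x, y in envs:
        logits = model(x).squeeze(-1)
        risks.append(F.binary_cross_entropy_with_logits(logits, y))
        penalties.append(irmv1_penalty(logits, y))
    return torch.stack(risks).mean() + lam * torch.stack(penalties).mean()

Note that an over-parameterized model that interpolates drives every per-environment training loss, and hence a penalty of this form, to (near) zero whether or not the model is invariant; this gives one intuition, ours rather than a claim from the abstract, for the empirical failures described above.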