Events
The Gaussian world is not enough - how data shapes neural network representations
Speaker: Sebastian Goldt
Location: CCN Classroom, 4th Floor, 160 Fifth Ave (off-campus)
Date: Thursday, April 13, 2023
Neural networks are powerful feature extractors - but which features do they extract from their data? And how does the structure of the training data shape the representations they learn? We discuss these questions from three points of view. First, we present analytical and experimental evidence for a “distributional simplicity bias”, whereby neural networks learn increasingly complex distributions of their inputs during training, in architectures ranging from a simple perceptron up to deep ResNets. We then show that neural networks can extract information from the higher-order cumulants of their inputs more efficiently than lazy methods. Finally, we develop a simple model of images and show that a neural network trained on them learns a convolution from scratch by exploiting the structure of the higher-order cumulants of the “images”.
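The abstract's comparisons rest on contrasting real data with Gaussian surrogates that keep only the first two cumulants (mean and covariance). The sketch below is not from the talk; it is a minimal NumPy illustration, with a made-up function name (gaussian_clone) and toy data, of how such a surrogate can be built and why it erases the higher-order statistics that the abstract refers to.

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_clone(X, rng=rng):
        """Sample a Gaussian dataset with the same mean and covariance as X.

        The clone keeps the first two cumulants of the data but discards all
        higher-order cumulants; any gap between a network trained on X and one
        trained on the clone must therefore come from higher-order statistics.
        """
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        return rng.multivariate_normal(mean, cov, size=X.shape[0])

    # Toy data with purely higher-order dependence between the two coordinates:
    # x1 = x0^2 - 1 is uncorrelated with x0, so the dependence is invisible to
    # the mean and covariance but shows up in third-order statistics.
    x0 = rng.standard_normal(100_000)
    X = np.stack([x0, x0**2 - 1], axis=1)
    X_clone = gaussian_clone(X)

    print("covariances match:", np.allclose(np.cov(X.T), np.cov(X_clone.T), atol=0.1))
    print("E[x0^2 x1] data :", np.mean(X[:, 0]**2 * X[:, 1]))              # ~2
    print("E[x0^2 x1] clone:", np.mean(X_clone[:, 0]**2 * X_clone[:, 1]))  # ~0

Here the third-order statistic E[x0^2 x1] separates the data from its Gaussian clone even though both have the same covariance, which is the kind of higher-order signal the abstract argues networks can exploit.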
Speaker: Sebastian Goldt, Group Leader, Theory of Neural Networks Group, SISSA. He will be available for meetings on the Wednesday, Thursday, and Friday of that week. If you would like to attend the talk in person, please email our colleague Jessica Hauser at jhauser@flatironinstitute.org for a guest registration at Flatiron; she can also schedule a meeting with Sebastian during his visit.