CDS LUNCH SEMINAR: How Neural Networks See, Learn and Forget

Speaker: Maithra Raghu

Location: On-Line

Date: Wednesday, December 1, 2021

Neural networks have been at the heart of machine learning breakthroughs over the past several years. Yet even as they become ubiquitous and more standardized, new developments challenge our assumptions about how they function. In this talk, the speaker will give an overview of her work studying the inner workings of new neural network models for vision, along with insights into learning and forgetting behavior in these systems. Specifically, she will discuss the recent successes of Transformers in computer vision and show how these might arise from key differences in their learned visual representations. The speaker will examine the consequences for (transfer) learning in these models. Investigating learning in these systems further, she will then share findings on continual learning in neural networks and the ways in which catastrophic forgetting manifests in their representations. Finally, the speaker will explore connections to task semantics and new forgetting-mitigation methods suggested by these insights.