CDS Seminar: Towards more human-like learning in machines: Bridging the data and generalization gaps

Speaker: Brenden Lake

Location: 60 Fifth Avenue, Room 150

Date: Friday, March 7, 2025

There is an enormous data gap between how AI systems and children learn: the best LLMs now learn language from text with a word count in the trillions, whereas it would take a child roughly 100K years to reach those numbers through speech (Frank, 2023, "Bridging the data gap"). There is also a clear generalization gap: whereas machines struggle with systematic generalization, children can excel. For instance, once a child learns how to "skip," they immediately know how to "skip twice" or "skip around the room with their hands up" due to their compositional skills. In this talk, I'll describe two case studies in addressing these gaps.

1) The data gap: We train deep neural networks from scratch, not on large-scale data from the web, but through the eyes and ears of a single child. Using head-mounted video recordings from a child as training data (<200 hours of video slices over 26 months), we show how deep neural networks can perform challenging visual tasks, acquire many word-referent mappings, generalize to novel visual referents, and achieve multi-modal alignment. Our results demonstrate how today's AI models can learn key aspects of children's early knowledge from realistic input.

2) The generalization gap: Can neural networks capture human-like systematic generalization? We address a 35-year-old debate catalyzed by Fodor and Pylyshyn's classic article, which argued that standard neural networks are not viable models of the mind because they lack systematic compositionality -- the algebraic ability to understand and produce novel combinations from known components. We'll show how neural networks can achieve human-like systematic generalization when trained through meta-learning for compositionality (MLC), a new method for optimizing the compositional skills of neural networks through practice. With MLC, neural networks can match human performance and solve several machine learning benchmarks.
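As a toy illustration of the systematic compositionality at stake (this sketch is not Lake's benchmark or code; the grammar and action names are invented for illustration), a SCAN-style interpreter composes known primitives with modifiers algebraically, so a learner that induces the rules handles novel combinations like "skip twice" immediately:

```python
# Toy SCAN-style grammar: primitives and modifiers compose algebraically.
# A learner that recovers these rules generalizes to unseen combinations
# (e.g. "skip twice") after seeing "skip" and "jump twice" separately.
PRIMITIVES = {"jump": ["JUMP"], "skip": ["SKIP"], "walk": ["WALK"]}

def interpret(command: str) -> list[str]:
    """Map a command like 'skip twice' to its action sequence."""
    words = command.split()
    actions = PRIMITIVES[words[0]][:]  # look up the primitive verb
    for mod in words[1:]:              # apply modifiers left to right
        if mod == "twice":
            actions = actions * 2
        elif mod == "thrice":
            actions = actions * 3
        else:
            raise ValueError(f"unknown modifier: {mod}")
    return actions

print(interpret("skip twice"))  # ['SKIP', 'SKIP']
```

The point of the debate is whether a neural network, rather than a hand-written rule system like this, can acquire this kind of rule-governed behavior from examples; MLC optimizes networks toward exactly this skill through practice on many compositional tasks.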
Given these findings, we'll discuss the paths forward for building machines that learn, generalize, and interact in more human-like ways based on more natural input, and for addressing classic debates in cognitive science through advances in AI.