Compositional Learning for Concepts and Perception

Speaker: Kevin Ellis

Location: 60 Fifth Avenue, 7th Floor Open Space

Date: Monday, August 7, 2023

Reception (with light lunch): 11:30 AM-12:30 PM
Talk: 12:30 PM-1:45 PM
Center for Data Science, 60 5th Ave., 7th floor open space

Abstract: A classic idea in artificial intelligence and cognitive science is that knowledge should be encoded in an algebraic representation that allows combining smaller concepts to build bigger ones. The ability to compose new knowledge out of old can allow learners to generalize systematically and to modularly transfer expertise across tasks. Computationally, though, it is difficult to design algorithms that implement this idea: as soon as free-form composition of concepts is allowed, the learner faces an explosion of possible concepts, and it is hard to figure out exactly what should be learned.

This talk gives two updates on how to build more compositional machine learners. First, how to use pretrained neural networks to efficiently home in on the right combination of concepts, focusing on abstract symbolic problems involving natural numbers and sets. Second, how compositional learners can ground their inferences in raw high-dimensional inputs, so that they can start with low-level perceptual data and come to learn and reason using more high-level representations.

Joint work with Hao Tang.
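To give a concrete feel for the "explosion of possible concepts" the abstract mentions, here is a toy sketch (not the method from the talk or the linked papers): a brute-force enumerator over a tiny hypothetical library of primitive concepts on natural numbers, where the number of candidate compositions grows exponentially with depth.

```python
from itertools import product

# Hypothetical primitive concepts over natural numbers (illustrative only;
# not the actual concept library used in the talk).
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
}

def enumerate_concepts(depth):
    """Enumerate all compositions of primitives up to the given nesting depth."""
    concepts = dict(PRIMITIVES)
    for _ in range(depth - 1):
        new = {}
        # Wrap each existing concept with each primitive, e.g. "double(inc)".
        for (inner_name, inner_fn), (outer_name, outer_fn) in product(
            concepts.items(), PRIMITIVES.items()
        ):
            new[f"{outer_name}({inner_name})"] = (
                lambda f, g: lambda x: g(f(x))
            )(inner_fn, outer_fn)
        concepts.update(new)
    return concepts

# The candidate pool grows exponentially with depth (3, 12, 39, 120, ...),
# which is why an unguided search quickly becomes intractable.
for d in range(1, 5):
    print(d, len(enumerate_concepts(d)))
```

A learner facing this space must score or prune candidates; the talk's first update concerns using pretrained neural networks to focus such a search, rather than enumerating exhaustively as this sketch does.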
Related papers: https://arxiv.org/pdf/2206.05922.pdf, https://arxiv.org/pdf/2306.02797.pdf

Bio/Background: Kevin Ellis works on artificial intelligence and program synthesis at Cornell University, where he is an assistant professor of computer science. Previously, he was a PhD student in Cognitive Science at MIT, and a research scientist at Common Sense Machines working on 3D deep learning.