
Andrew Gordon Wilson

Talks

How Do We Build a General Intelligence?
The talk is in five parts: 1) How do we build systems that learn and generalize, from a perspective of probability and compression? Can we use these principles to resolve mysterious generalization behaviour in deep learning? 2) Is it possible to build general-purpose AI systems in light of results like the no free lunch theorems? 3) What are the prescriptions for general intelligence? 4) What are the demonstrations of those principles in scientific settings? 5) What are we far away from solving?
ICLR, How Far Are We from AGI?, Vienna, May 2024

Machine Learning is Linear Algebra
I talk about how our modelling assumptions manifest themselves as algebraic structure in a variety of settings, and how we can algorithmically exploit that structure for better scaling laws with transformers. At the end, I also discuss some principles for representing inductive biases more broadly -- embracing a notion of soft inductive biases over more traditional restriction biases.
Simons Institute, September 2024
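
To give a flavor of what "exploiting algebraic structure" means in code, here is a minimal PyTorch sketch of a Kronecker-structured linear layer -- an illustration of the general idea rather than the specific parameterizations from the talk (class name and sizes are made up):

    import torch
    import torch.nn as nn

    class KroneckerLinear(nn.Module):
        """A linear layer whose weight is constrained to a Kronecker
        product W = A kron B, so an (m*n) x (m*n) map is stored with
        O(m^2 + n^2) parameters instead of O(m^2 * n^2)."""
        def __init__(self, m: int, n: int):
            super().__init__()
            self.m, self.n = m, n
            self.A = nn.Parameter(torch.randn(m, m) / m ** 0.5)
            self.B = nn.Parameter(torch.randn(n, n) / n ** 0.5)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Apply y = (A kron B) x without forming the full weight:
            # reshape x into an (m, n) matrix X and compute A @ X @ B^T.
            X = x.view(-1, self.m, self.n)
            Y = self.A @ X @ self.B.transpose(-1, -2)
            return Y.reshape(x.shape[0], self.m * self.n)

    # Sanity check against the explicit Kronecker product.
    layer = KroneckerLinear(4, 8)
    x = torch.randn(2, 32)
    assert torch.allclose(layer(x), x @ torch.kron(layer.A, layer.B).T, atol=1e-5)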

Bayesian Deep Learning
A talk on Bayesian deep learning, including a philosophy for model construction, understanding loss surfaces, and a function space view of machine learning.
ICML 2020 Tutorial, Online
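
The recurring object in this view is the posterior predictive, which averages predictions over many plausible parameter settings rather than committing to one. As a rough, hedged sketch of that theme -- a small ensemble of independently trained networks standing in for posterior samples, with made-up data and architecture:

    import torch
    import torch.nn as nn

    # Toy regression data.
    x = torch.linspace(-3, 3, 100).unsqueeze(-1)
    y = torch.sin(x) + 0.1 * torch.randn_like(x)

    # Train several networks from different initializations; each finds
    # a different plausible solution.
    models = []
    for _ in range(5):
        net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(500):
            opt.zero_grad()
            ((net(x) - y) ** 2).mean().backward()
            opt.step()
        models.append(net)

    # Average the solutions instead of trusting any single one.
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (5, 100, 1)
        mean = preds.mean(0)     # approximate predictive mean
        spread = preds.var(0)    # disagreement ~ epistemic uncertainty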

Informal Interviews
I discuss my general research interests, several projects, misconceptions about how deep learning works (!), personal background, teaching, advising, etc.
1. Anarchy Accelerometer with Matt Mirman (October 2023, NYC)
2. ICML Behind the Scenes Chats with Amin Karbasi (July 2024, Vienna)

Introduction to Bayesian Machine Learning
A whiteboard talk introducing the foundations of Bayesian machine learning. I open with some background on how I started, and how my thinking has evolved.
NYU AI Winter School 2021, Online
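
For context, the foundations the talk builds on reduce to two standard equations: the posterior over parameters w given data D, and the posterior predictive that marginalizes over w (standard notation, not taken from the talk itself):

    \begin{align}
      p(w \mid \mathcal{D}) &= \frac{p(\mathcal{D} \mid w)\, p(w)}{p(\mathcal{D})} \\
      p(y \mid x, \mathcal{D}) &= \int p(y \mid x, w)\, p(w \mid \mathcal{D})\, \mathrm{d}w
    \end{align}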

Scalable Gaussian Processes for Scientific Discovery
A seminar talk at EPFL. In this talk I introduce Gaussian processes, and outline a philosophy for model construction and scalable inference. I present several works, including spectral mixture kernels, Kronecker inference, and deep kernel learning, alongside scientific applications. Many of these methods are now implemented in the GPyTorch library, and various tutorial resources are available.
Lausanne, Switzerland, February 2016
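
As one concrete example in GPyTorch, here is a hedged sketch of exact GP regression with a spectral mixture kernel, following the library's documented ExactGP pattern (data and settings are illustrative):

    import math
    import torch
    import gpytorch

    class SMKernelGP(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.SpectralMixtureKernel(num_mixtures=4)
            self.covar_module.initialize_from_data(train_x, train_y)

        def forward(self, x):
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(x), self.covar_module(x))

    train_x = torch.linspace(0, 1, 100)
    train_y = torch.sin(2 * math.pi * train_x) + 0.1 * torch.randn(100)

    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = SMKernelGP(train_x, train_y, likelihood)

    # Fit kernel hyperparameters by maximizing the marginal likelihood.
    model.train(); likelihood.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
    for _ in range(100):
        optimizer.zero_grad()
        (-mll(model(train_x), train_y)).backward()
        optimizer.step()

    # Spectral mixture kernels can extrapolate beyond the training range.
    model.eval(); likelihood.eval()
    with torch.no_grad():
        pred = likelihood(model(torch.linspace(0, 1.5, 50)))
        mean, (lower, upper) = pred.mean, pred.confidence_region()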

Bayesian GAN
An invited talk on the Bayesian GAN at the BIRS Workshop at the Interface of Machine Learning and Statistics, including an introduction to generative adversarial networks and Bayesian deep learning.
Banff, Canada, January 2018

Bayesian Optimization with Gradients
A NIPS 2017 oral, presented jointly with Peter Frazier, on how best to exploit derivative information in Bayesian optimization.
Long Beach, USA, December 2017
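
The paper's acquisition function (a knowledge-gradient variant) is beyond a short snippet, but the reason gradients are natural to exploit is that differentiation is a linear operator, so a GP over f extends to a joint GP over (f, f'). A hedged NumPy sketch of conditioning on both values and gradients in 1D, with an RBF kernel and illustrative hyperparameters:

    import numpy as np

    def rbf_blocks(x1, x2, ell=0.3):
        # Covariance blocks between function values and derivatives.
        d = x1[:, None] - x2[None, :]
        k = np.exp(-0.5 * d ** 2 / ell ** 2)
        k_fd = (d / ell ** 2) * k                        # cov(f(x1), f'(x2))
        k_df = -(d / ell ** 2) * k                       # cov(f'(x1), f(x2))
        k_dd = (1.0 / ell ** 2 - d ** 2 / ell ** 4) * k  # cov(f'(x1), f'(x2))
        return k, k_fd, k_df, k_dd

    def posterior_mean(x_tr, y, g, x_te, noise=1e-6):
        # Condition jointly on observed values y and gradients g.
        k_ff, k_fd, k_df, k_dd = rbf_blocks(x_tr, x_tr)
        K = np.block([[k_ff, k_fd], [k_df, k_dd]]) + noise * np.eye(2 * len(x_tr))
        ks_ff, ks_fd, _, _ = rbf_blocks(x_te, x_tr)
        return np.hstack([ks_ff, ks_fd]) @ np.linalg.solve(K, np.concatenate([y, g]))

    x_tr = np.array([0.1, 0.5, 0.9])
    y, g = np.sin(3 * x_tr), 3 * np.cos(3 * x_tr)  # values and gradients
    mu = posterior_mean(x_tr, y, g, np.linspace(0, 1, 50))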

Stochastic Variational Deep Kernel Learning
A short, three-minute video introducing our NIPS 2016 paper. This method is implemented in GPyTorch.
Barcelona, Spain, December 2016
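
A hedged sketch of the deep kernel idea in GPyTorch -- a neural feature extractor feeding a standard GP kernel. For brevity this is the exact-GP variant; the stochastic variational version in the paper adds inducing points and minibatch training (architecture and sizes here are made up):

    import torch
    import gpytorch

    class DeepKernelGP(gpytorch.models.ExactGP):
        def __init__(self, train_x, train_y, likelihood):
            super().__init__(train_x, train_y, likelihood)
            # Network mapping inputs to a low-dimensional feature space.
            self.features = torch.nn.Sequential(
                torch.nn.Linear(train_x.shape[-1], 64),
                torch.nn.ReLU(),
                torch.nn.Linear(64, 2))
            self.mean_module = gpytorch.means.ConstantMean()
            self.covar_module = gpytorch.kernels.ScaleKernel(
                gpytorch.kernels.RBFKernel())

        def forward(self, x):
            z = self.features(x)  # base kernel acts on learned features
            return gpytorch.distributions.MultivariateNormal(
                self.mean_module(z), self.covar_module(z))

    train_x, train_y = torch.randn(50, 3), torch.randn(50)
    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = DeepKernelGP(train_x, train_y, likelihood)

Network weights and kernel hyperparameters then share one objective, the GP marginal likelihood, and train with the same loop as any exact GP above.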

Other talks can be found linked in my paper list.