CILVR Seminar: You can just align things.
Speaker: Brian Cheung
Location:
60 Fifth Avenue, Room 125
Videoconference link:
https://nyu.zoom.us/j/96407242212
Date: Wednesday, October 22, 2025
Representation alignment is often treated as just another optimization objective in the traditional deep-learning pipeline: you define a loss that encodes your alignment metric and then train or fine-tune a network end-to-end. In this talk, I argue that representations can be steered in alternative ways, breaking this cycle. These methods range from aligning models across different modalities without any training to realizing significant benefits by aligning with a randomly initialized model. I’ll begin by showing that cross-modal alignment is achievable at inference time alone, without any training. I’ll also show that progressive alignment keeps capabilities intact as we transfer across very different architectures. In short, there are more ways to align than initially thought, so you should align things.
Bio: Brian Cheung is an incoming Assistant Professor in the Department of Bioengineering and Therapeutic Sciences at UCSF. Brian studies the convergence of representations across multiple levels: from the structural aspects of how intelligence is accomplished in biology and in silico, to the nature of how meaningful representations are generated from raw inputs, all the way to how these systems ultimately make decisions. Brian received his PhD from the Redwood Center for Theoretical Neuroscience at UC Berkeley, where he was advised by Bruno Olshausen. He is currently a postdoc in MIT’s Department of Brain and Cognitive Sciences and at CSAIL, working with Boris Katz, Tomaso Poggio, and Phillip Isola.
This is Brian’s 1-1s schedule during his visit tomorrow. Please add your name + meeting location if you want to chat with him: https://docs.google.com/spreadsheets/d/196JtJNh8wnQa3Qmvv1nowmBk649_Vrsw742L-PGLdsk/edit?usp=sharing