Events
Taking a Big Step: Large Learning Rates in Denoising Score Matching Prevent Memorization
Speaker: Pierre Marion
Location: 60 Fifth Avenue, Room 150
Date: Thursday, June 26, 2025
Abstract:
Denoising score matching plays a pivotal role in the performance of diffusion-based generative models. However, the empirical optimal score, i.e., the exact minimizer of the denoising score matching objective, leads to memorization, where generated samples replicate the training data. Yet, in practice, only a moderate degree of memorization is observed, even without explicit regularization. In this paper, we investigate this phenomenon by uncovering an implicit regularization mechanism driven by large learning rates. Specifically, we show that in the small-noise regime, the empirical optimal score exhibits high irregularity. We then prove that, when trained by stochastic gradient descent with a large enough learning rate, neural networks cannot stably converge to a local minimum with arbitrarily small excess risk. Consequently, the learned score cannot be arbitrarily close to the empirical optimal score, thereby mitigating memorization. To make the analysis tractable, we consider one-dimensional data and two-layer neural networks. Experiments validate the crucial role of the learning rate in preventing memorization, even beyond the one-dimensional setting.
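The sketch below is not the authors' code; it is a minimal illustration of the objects named in the abstract, under assumed hyperparameters (data size, noise levels, network width, step counts, learning rates, and a gradient-clipping safeguard are all illustrative choices). It computes the empirical optimal score in closed form for one-dimensional data, shows numerically that it becomes more irregular as the noise level shrinks, and trains a two-layer ReLU network by denoising score matching with plain SGD at two learning rates, reporting how far the learned score ends up from the empirical optimal score.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 32
data = torch.randn(n, 1)          # one-dimensional training points (illustrative)

def empirical_optimal_score(y, sigma):
    """Score of the Gaussian-smoothed empirical distribution (1/n) sum_i N(x_i, sigma^2),
    i.e. the exact minimizer of the denoising score matching objective."""
    diffs = data.T - y                                    # (m, n): x_i - y_j
    w = torch.softmax(-(diffs ** 2) / (2 * sigma ** 2), dim=1)
    return (w * diffs).sum(dim=1, keepdim=True) / sigma ** 2

# 1) The empirical optimal score becomes increasingly irregular as sigma -> 0.
grid = torch.linspace(-3.0, 3.0, 2001).unsqueeze(1)
dy = (grid[1] - grid[0]).item()
for sigma in (0.5, 0.1, 0.02):
    s = empirical_optimal_score(grid, sigma)
    max_slope = ((s[1:] - s[:-1]).abs().max() / dy).item()
    print(f"sigma={sigma}: max finite-difference slope of the optimal score ~ {max_slope:.1f}")

# 2) Denoising score matching with a two-layer ReLU network trained by SGD.
def train(lr, sigma=0.1, steps=3000, batch=32, width=64):
    net = nn.Sequential(nn.Linear(1, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    for _ in range(steps):
        x = data[torch.randint(0, n, (batch,))]
        eps = torch.randn_like(x)
        y = x + sigma * eps
        loss = ((net(y) + eps / sigma) ** 2).mean()       # DSM objective
        opt.zero_grad()
        loss.backward()
        # safeguard only, to keep this toy run finite at the larger step size
        torch.nn.utils.clip_grad_norm_(net.parameters(), 100.0)
        opt.step()
    return net

sigma = 0.1
target = empirical_optimal_score(grid, sigma)
for lr in (1e-3, 1e-2):                                   # "small" vs. "larger" step size
    net = train(lr, sigma=sigma)
    with torch.no_grad():
        gap = (net(grid) - target).abs().mean().item()
    print(f"lr={lr:g}: mean |learned score - empirical optimal score| = {gap:.2f}")
```

How far the learned score can track the spiky empirical optimal score depends on the step size and the other (assumed) hyperparameters; the closed-form target is simply the score of the Gaussian mixture centered at the training points.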
Bio:
Pierre has been a postdoctoral researcher at EPFL under the supervision of Lénaïc Chizat since January 2024. For the Spring 2025 semester, he is a research fellow at the Simons Institute at UC Berkeley, in the program on Transformers and LLMs. Before that, he was a PhD student at Sorbonne Université under the supervision of Gérard Biau and Jean-Philippe Vert. His current research interests concern the theory of deep learning. He is interested in various architectures, ranging from shallow networks to generative models, and tries to understand their optimization and statistical properties.