Peter Carr Seminar Series: Optimization and Learning for Mean-Field Games via Occupation Measure

Speaker: Anran Hu

Date: Wednesday, January 29, 2025

Mean-field games (MFGs) and multi-agent reinforcement learning (MARL) have become essential frameworks for analyzing interactions in large-scale systems. This talk presents recent advances at the intersection of MFGs and MARL. We begin with a new framework, MF-OMO (Mean-Field Occupation Measure Optimization), which reformulates the Nash equilibria of discrete-time MFGs as the solutions of a single optimization problem over occupation measures. This fresh characterization enables the use of standard optimization algorithms to identify multiple equilibria without relying on restrictive assumptions. We also extend these results to continuous-time, finite-state MFGs. Building on the concept of occupation measures, we then introduce MF-OML (Mean-Field Occupation Measure Learning), the first fully polynomial online reinforcement learning algorithm capable of finding approximate Nash equilibria in large-population games beyond zero-sum and potential games. Under monotonicity conditions, we establish regret bounds for $N$-player games that can be approximated by MFGs. Together, these advances provide a comprehensive approach to characterizing and solving Nash equilibria in complex multi-agent environments.
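As background for the talk, a common discrete-time notion of an occupation measure and its consistency (flow) constraint can be sketched as follows; the notation below is illustrative and may differ from the speaker's formulation:

```latex
% d_t(s,a): probability that a representative agent, following policy \pi,
% is in state s and takes action a at time t.
% Given a transition kernel P and the induced mean field \mu_t:
\begin{align*}
  \mu_t(s) &= \sum_{a} d_t(s,a), \\
  \sum_{a'} d_{t+1}(s',a') &= \sum_{s,a} d_t(s,a)\, P\!\left(s' \mid s, a, \mu_t\right).
\end{align*}
% Optimizing a suitable objective over the variables \{d_t\} subject to
% these linear-in-d constraints is the style of reformulation that
% occupation-measure approaches such as MF-OMO build on.
```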

Location: 5 MetroTech Center Room LC 400