Learning to Optimize: A Gentle Introduction

Speaker: Professor Zhangyang “Atlas” Wang

Location: Online
Videoconference link: https://nyu.zoom.us/j/92253500145?pwd=OFdWZU9zMEsrc2RmR0RFUlNSemJZQT09

Date: Wednesday, July 6, 2022

Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods, aiming to reduce the laborious iterations of hand engineering. It automates the design of an optimization method based on its performance on a set of training problems. This data-driven procedure generates methods that efficiently solve problems similar to those seen in training. In sharp contrast, the typical, traditional design of optimization methods is theory-driven, so the resulting methods obtain performance guarantees over the classes of problems specified by the theory. This difference makes L2O well suited for repeatedly solving a particular type of optimization problem over a specific distribution of data, while it typically fails on out-of-distribution problems. The practicality of L2O depends on the type of target optimization problem, the chosen architecture of the method to learn, and the training procedure. This new paradigm has motivated a community of researchers to explore L2O and report their findings. In this talk, I'll give an informal overview of this frontier: setting up taxonomies, categorizing existing works and research directions, presenting insights, identifying open challenges, and presenting benchmark use cases. I'll also try to discuss how L2O can be meaningful in solving real computer vision and robotics problems.

Reference: https://arxiv.org/pdf/2103.12828.pdf (JMLR 2022)
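
To make the paradigm concrete, below is a minimal, hypothetical sketch in Python (not taken from the talk or the referenced survey). The "optimizer" being learned is simply gradient descent with a tunable step size, and the meta-training objective is the average final loss over a set of sampled quadratic training problems; all names, dimensions, and constants are illustrative assumptions.

# Minimal L2O-style sketch (illustrative only, not the method from the talk):
# learn the step size of gradient descent by minimizing the average final
# loss over a distribution of random convex quadratic problems.
import numpy as np

rng = np.random.default_rng(0)
DIM, STEPS = 10, 20

def sample_problem():
    """Sample a random convex quadratic f(x) = 0.5 * x^T A x - b^T x."""
    M = rng.normal(size=(DIM, DIM))
    A = M @ M.T + np.eye(DIM)          # symmetric positive definite
    b = rng.normal(size=DIM)
    return A, b

def run_optimizer(step_size, A, b):
    """Run STEPS iterations of gradient descent with the given step size."""
    x = np.zeros(DIM)
    for _ in range(STEPS):
        grad = A @ x - b
        x = x - step_size * grad
    return 0.5 * x @ A @ x - b @ x     # final objective value

# "Meta-training": pick the step size that performs best, on average,
# over the sampled training problems.
train_problems = [sample_problem() for _ in range(50)]
candidates = np.linspace(0.01, 0.5, 50)
meta_losses = [np.mean([run_optimizer(a, A, b) for A, b in train_problems])
               for a in candidates]
learned_alpha = candidates[int(np.argmin(meta_losses))]

# "Meta-testing": apply the learned method to new problems drawn from the
# same distribution -- the setting where L2O is expected to help.
test_problems = [sample_problem() for _ in range(20)]
test_loss = np.mean([run_optimizer(learned_alpha, A, b) for A, b in test_problems])
print(f"learned step size: {learned_alpha:.3f}, mean test loss: {test_loss:.3f}")

In this toy setting the learned step size transfers to new quadratics drawn from the same distribution, but nothing guarantees good behavior on differently scaled, out-of-distribution problems, which mirrors the caveat in the abstract above.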