T/TT Director, Center for AI and Deep Learning Seminar: Mathematical and Algorithmic Innovations in AI: From Preference Learning to Language Models

Speaker: Yuan Yao

Location: 60 Fifth Avenue, 7th Floor Open Space

Date: Monday, December 9, 2024

This talk presents advanced mathematical frameworks and algorithmic innovations that drive the development of more reliable, efficient, and robust AI systems:

1) Topological and Geometric Preference Learning via Hodge Theory: We introduce a novel approach to preference learning based on Hodge theory, using topological and geometric tools to decompose preferences into consistent and cyclic components. Applications in AI, including recommendation systems and visual in-context learning, will be discussed.

2) Beyond Gradient Descent: We explore alternative optimization techniques such as mirror descent, ADMM, and block coordinate descent (BCD). These methods address key challenges in training deep neural networks and offer enhanced interpretability and efficiency.

3) Learning Dynamics and Multi-Modalities in Large Language Models (LLMs) for the Sciences: We investigate how LLMs process and integrate multi-modal data, such as graphs, images, and sequences, to learn dynamics and chemical reactions, with a focus on applications in chemistry and physics.

Together, these topics showcase cutting-edge methodologies at the intersection of mathematics, optimization, and AI, paving the way for transformative advances across a variety of scientific domains.
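To give a flavor of the Hodge-theoretic decomposition in topic 1, here is a minimal sketch on hypothetical toy data (three items with a preference cycle); it is an illustration of the general technique, not code or data from the talk. Pairwise preference intensities are treated as a flow on the edges of a comparison graph; a least-squares fit of node potentials recovers the consistent (gradient) component, and the residual is the cyclic part.

```python
import numpy as np

# Hypothetical toy data: 3 items, flow y_e estimating score_i - score_j
# on each ordered edge (i, j). The values mix a consistent ranking
# with a cycle, so the decomposition is non-trivial.
n = 3
edges = [(0, 1), (1, 2), (0, 2)]
y = np.array([1.0, 1.0, 0.5])

# Incidence matrix of the comparison graph (discrete gradient operator).
d = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    d[k, i], d[k, j] = 1.0, -1.0

# Consistent component: least-squares potentials s minimizing
# sum_e (s_i - s_j - y_e)^2 (a graph Laplacian system).
s, *_ = np.linalg.lstsq(d, y, rcond=None)
grad = d @ s          # consistent (gradient) part of the flow
cyclic = y - grad     # cyclic residual (curl + harmonic parts)
```

On this toy input the residual is a pure circulation around the triangle (its divergence `d.T @ cyclic` vanishes), which is exactly the inconsistency that a global ranking cannot explain.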
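As a minimal illustration of the mirror descent mentioned in topic 2 (a generic textbook sketch under assumed toy costs, not material from the talk): with the entropy mirror map on the probability simplex, mirror descent reduces to the exponentiated-gradient update, a multiplicative step followed by renormalization.

```python
import numpy as np

# Hypothetical linear objective f(x) = <c, x> over the simplex.
c = np.array([0.3, 0.1, 0.6])
x = np.ones(3) / 3        # start at the uniform distribution
eta = 0.5                 # step size

for _ in range(200):
    x = x * np.exp(-eta * c)   # multiplicative (mirror) step w.r.t. entropy
    x /= x.sum()               # Bregman projection back onto the simplex

# x concentrates mass on the coordinate with the smallest cost
```

The entropy geometry keeps every iterate strictly inside the simplex without any explicit constraint handling, which is one reason such non-Euclidean methods can be attractive alternatives to plain gradient descent.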