Towards Understanding and Modeling the Interaction with Large Language Models

Speaker: Ziyu Yao

Location: 60 Fifth Avenue, Room 204
Videoconference link: https://nyu.zoom.us/j/4202785259

Date: Wednesday, May 22, 2024

While NLP systems in the past were typically non-interactive, or required non-trivial effort to become interactive, large language models (LLMs) have ushered in a new paradigm. Today, human users naturally expect and conduct multi-turn conversations with LLM-powered NLP systems, and these systems can also interact with one another. In this talk, I will present some of my group's explorations on this topic. In the first project, we study human feedback in the task of semantic parsing (e.g., text-to-code generation). We found that modeling such feedback remains challenging even for LLM-based semantic parsers, and I will discuss the promise of building user simulators for better feedback modeling. In the second project, we explore how a group of LLM agents can be given different characteristics and interact with each other. We focus on applications in education, specifically simulating multiple virtual students who discuss and collaboratively solve mathematics problems with a human student. Under this intriguing yet challenging setup, we discuss two critical alignment problems and our solutions. Finally, I will conclude the talk by summarizing other research problems concerning interaction with LLMs.