Events
Reinforcement learning to align NLG learning objectives with desired model behaviors
Speaker: Shiyue Zhang
Location: 60 Fifth Avenue, Room 204
Date: Thursday, March 23, 2023
We have witnessed impressive progress in natural language generation (NLG) recently, yet reliability issues remain. On the one hand, NLG models sometimes still exhibit unwanted behaviors; for example, they may generate incoherent and repetitive text following a prompt. On the other hand, we face a dilemma when evaluating generation systems: human evaluation is desirable yet expensive and not reproducible, while automatic metrics are cheap and reproducible yet not always trustworthy. In this talk, I will present my past work on using reinforcement learning to align NLG learning objectives with desired model behaviors, and on combining human and automatic evaluations to strike a good trade-off between the two.
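To make the reinforcement-learning idea concrete, here is a minimal, hedged sketch of a generic REINFORCE-style policy-gradient update for a toy sequence generator. A simple repetition-penalizing reward stands in for whatever desired behavior or automatic metric one wants to encourage; the toy GRU model, vocabulary size, and reward function are illustrative assumptions, not the speaker's actual method.

```python
# Illustrative REINFORCE sketch for sequence generation (not the speaker's setup):
# a toy GRU generator is rewarded for avoiding repeated tokens, standing in for
# any desired behavior or automatic metric used as a reward signal.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, MAX_LEN = 50, 64, 20  # toy sizes, chosen arbitrarily

class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tok, hidden=None):
        emb = self.embed(tok)                 # (batch, 1, HIDDEN)
        out, hidden = self.rnn(emb, hidden)
        return self.out(out[:, -1]), hidden   # logits over the next token

def sample_sequence(model, batch_size=8):
    """Sample token by token, keeping log-probs for the policy gradient."""
    tok = torch.zeros(batch_size, 1, dtype=torch.long)  # token 0 as BOS (assumption)
    hidden, log_probs, tokens = None, [], []
    for _ in range(MAX_LEN):
        logits, hidden = model(tok, hidden)
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample().unsqueeze(1)
        log_probs.append(dist.log_prob(tok.squeeze(1)))
        tokens.append(tok.squeeze(1))
    return torch.stack(tokens, dim=1), torch.stack(log_probs, dim=1)

def reward_fn(tokens):
    """Toy reward: fraction of distinct tokens, discouraging repetition."""
    return torch.tensor([len(set(seq.tolist())) / len(seq) for seq in tokens])

model = ToyGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    tokens, log_probs = sample_sequence(model)
    rewards = reward_fn(tokens)
    baseline = rewards.mean()  # simple batch-mean baseline for variance reduction
    # REINFORCE: maximize expected reward by minimizing
    # -(reward - baseline) * sum of log-probs of the sampled sequence
    loss = -((rewards - baseline) * log_probs.sum(dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice, the reward would come from the behavior or metric one wants the generator to satisfy (e.g., a coherence or faithfulness score), which is the sense in which the training objective is aligned with the desired model behavior.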