Demystifying Prompts in Large Language Models
Speaker: Hila Gonen
Location: 60 Fifth Avenue, Room 204
Date: Monday, November 14, 2022
Prompting is becoming increasingly popular for a variety of LM-based NLP tasks. While a rigorous understanding of effective prompting is still lacking, it is well established that prompts vary considerably in their performance. In this work, we analyze the root causes of this variance and demystify it by empirically establishing our hypothesis: the performance of a prompt is coupled with the extent to which the model is familiar with it. Over a wide range of tasks, we show that the lower the perplexity of the prompt, the better the prompt performs the task. Building on this insight, we devise guidelines for creating better prompts: we automatically extend a small seed set of manually written prompts via paraphrasing, using GPT-3 and backtranslation, and choose the lowest-perplexity prompts among the candidates, obtaining significant gains in performance over the manual prompts.
Time permitting, I will continue this line of work on better understanding large language models by presenting a thorough analysis of the training dynamics of multilingual models over time, revealing some interesting behaviours.
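As a rough illustration of the perplexity-based selection described above, the sketch below scores a handful of candidate prompts with a causal language model and ranks them from lowest to highest perplexity. This is a minimal sketch, not the authors' code: GPT-2 (via Hugging Face transformers) stands in for the larger models studied in the talk, the candidate prompts are hypothetical, and the paper's method additionally averages perplexity over prompts filled with task inputs rather than scoring bare instructions.

```python
# Minimal sketch of perplexity-based prompt ranking.
# Assumptions: GPT-2 as a stand-in scorer; hypothetical prompt texts.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = more familiar)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing input_ids as labels yields the mean next-token
        # cross-entropy loss; exponentiating it gives perplexity.
        loss = model(input_ids=enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Hypothetical paraphrases of a sentiment-classification instruction.
candidates = [
    "Classify the sentiment of the following review.",
    "Determine whether this review expresses a positive or negative opinion.",
    "Sentiment of the review below, positive or negative:",
]

# The hypothesis predicts that the lowest-perplexity prompt tends to
# perform the task best, so rank candidates by ascending perplexity.
for prompt in sorted(candidates, key=perplexity):
    print(f"{perplexity(prompt):8.2f}  {prompt}")
```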