CILVR SEMINAR: Taming Large Pre-Trained Neural Language Models: Differentiable Game-Theoretic Regularization and Sensitivity-Guided Optimization
Date:
Tuesday, March 29, 2022,
8PM
Location:
Online
Speaker:
Tuo Zhao, Georgia Tech
Pre-trained language models have fundamentally changed the landscape of NLP. However, as pre-trained language models grow increasingly large, the gains in their generalization performance have become marginal, especially when only limited labeled data is available for downstream tasks. To improve generalization, we propose a new framework for fine-tuning pre-trained models that yields better generalization performance.
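The abstract does not spell out the "differentiable game-theoretic regularization" in the title. One common instantiation of such a min-max game in fine-tuning is smoothness-inducing adversarial regularization: an inner player searches for a small input perturbation that maximally changes the model's predictions, and the outer player penalizes that change. The sketch below is a hypothetical NumPy illustration of that idea on a toy linear classifier (the model, the symmetric-KL objective, and all hyperparameters are assumptions for illustration, not the speaker's actual method):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sym_kl(p, q, eps=1e-12):
    # symmetric KL divergence between two categorical distributions
    kl_pq = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    kl_qp = np.sum(q * (np.log(q + eps) - np.log(p + eps)), axis=-1)
    return kl_pq + kl_qp

def smoothness_reg(W, x, epsilon=0.1, steps=3, step_size=0.05):
    """Inner maximization of the game: find a perturbation delta with
    ||delta||_inf <= epsilon that maximizes the symmetric KL between
    predictions on x and x + delta (gradient estimated numerically here
    to keep the sketch dependency-free)."""
    p = softmax(x @ W)                         # clean predictions (fixed)
    delta = np.zeros_like(x)
    h = 1e-5
    for _ in range(steps):
        grad = np.zeros_like(delta)
        base = np.mean(sym_kl(p, softmax((x + delta) @ W)))
        for idx in np.ndindex(*delta.shape):   # finite-difference gradient
            d2 = delta.copy()
            d2[idx] += h
            bumped = np.mean(sym_kl(p, softmax((x + d2) @ W)))
            grad[idx] = (bumped - base) / h
        # projected sign-gradient ascent step, clipped to the epsilon-ball
        delta = np.clip(delta + step_size * np.sign(grad), -epsilon, epsilon)
    return np.mean(sym_kl(p, softmax((x + delta) @ W)))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                    # toy linear "model"
x = rng.normal(size=(8, 4))                    # toy inputs
reg = smoothness_reg(W, x)
```

In a full fine-tuning objective, this regularizer would be added to the task loss, e.g. `loss = cross_entropy + lam * smoothness_reg(W, x)`, so minimizing the outer loss while the inner player maximizes the divergence forms the two-player game suggested by the title.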