Assistant Professor of Linguistics, Data Science & Computer Science
Co-PI, ML² Group & CILVR Lab
New York University
PhD 2016, Stanford NLP Group & Stanford Linguistics
I study artificial neural network models for natural language understanding, with a focus on building high-quality training and evaluation data and on applying these models to scientific questions in syntax and semantics.
I'm also generally sympathetic toward effective altruism, and I'm a member of Giving What We Can.
You're most likely to have encountered my group by way of our SNLI and MultiNLI datasets, our GLUE and SuperGLUE benchmark competitions, our jiant software toolkit, or our papers on topics like the inductive biases of language models or the viability of transfer learning for NLP.
If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at firstname.lastname@example.org.
- This fall and winter, I'll be speaking at UT Austin, Tel Aviv University, the University of Pennsylvania, Georgia Tech, UChicago/TTIC, and the NeurIPS Workshop on Self-supervised Learning. See you there?
- I have new funding for data collection/crowdsourcing work from the NSF CAREER program and from Google's Collabs program. Thanks to everyone who reviewed the proposals!
- I'm presenting a position paper at NAACL, speaking at the ACL workshop on Benchmarking, and co-presenting a tutorial on crowdsourcing at EMNLP.
- I joined the Department of Computer Science at NYU's Courant Institute as an associated faculty member, in addition to my primary appointment in Linguistics and Data Science.
- My co-PI on the ML² group, Tom Griffiths, and I were awarded a US$1.5M grant from the Open Philanthropy Project to support our work on ML-driven AI safety.
- I'm now writing a book with my colleague Carlos Diuk, on the theoretical foundations of learning to learn in machine learning.
- My paper "A Feature Pyramid Model for Natural Language Learning" was accepted to EMNLP.
- Those last three news items aren't real. They were generated by GPT-3, conditioned on the rest of this page, without cherry-picking.