Assistant Professor of Linguistics, Data Science & Computer Science
Co-PI, ML² Group & CILVR Lab
New York University
PhD 2016, Stanford NLP Group & Stanford Linguistics
I study artificial neural network models for natural language understanding, with a focus on building high-quality training and evaluation data and on applying these models to scientific questions in syntax and semantics.
I'm also generally sympathetic toward effective altruism, and I'm a member of Giving What We Can.
You're most likely to have encountered my group by way of our SNLI and MultiNLI datasets, our GLUE and SuperGLUE benchmark competitions, our jiant software toolkit, or our papers on topics like the inductive biases of language models or the viability of transfer learning for NLP.
If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at email@example.com.
- I'll be on sabbatical leave and away from NYU from Summer 2022 through Summer 2023 while taking on a visiting researcher role at Anthropic in San Francisco. I'll still be advising research students, but my availability for other service will be very limited.
- This fall and winter, I'll be speaking at UT Austin, Tel Aviv University, the University of Pennsylvania, University College London, Georgia Tech, UChicago/TTI-C, Technion, Hebrew University, Bar Ilan University, Carnegie Mellon University, Unbabel/IST, and the NeurIPS Workshop on Self-supervised Learning. I'll also be speaking on a NeurIPS plenary panel on benchmarking. See you there?
- I have new funding for data collection/crowdsourcing work from the NSF CAREER program and from Google's Collabs program. Thanks to everyone who reviewed the proposals!
- I'm presenting a position paper at NAACL, speaking at the ACL workshop on Benchmarking, and co-presenting a tutorial at EMNLP on crowdsourcing.
- I joined the Department of Computer Science at NYU's Courant Institute as an associated faculty member, in addition to my primary appointment in Linguistics and Data Science.
- My co-PI on the ML² group, Tom Griffiths, and I were awarded a US$1.5M grant from the Open Philanthropy Project to support our work on ML-driven AI safety.
- I'm now writing a book with my colleague Carlos Diuk, on the theoretical foundations of learning to learn in machine learning.
- My paper "A Feature Pyramid Model for Natural Language Learning" was accepted to EMNLP.
- Those last three news items aren't real. They were generated by GPT-3, conditioning on an earlier version of this page, without cherry-picking.