Member of Technical Staff
Associate Professor of Linguistics, Data Science & Computer Science
Co-PI, Alignment Research Group, ML² Group & CILVR Lab
(on research leave Summer 2022–Summer 2024)
New York University
PhD 2016, Stanford NLP Group & Stanford Linguistics
At NYU, I study artificial neural network models for natural language understanding, focusing on language model alignment, on building high-quality training and evaluation data, and on applying neural network models to scientific questions in syntax and semantics.
During an extended research leave (2022–2024), I'm also leading a research group at Anthropic working on language model alignment and evaluation. I'm continuing to advise research at NYU during this time, but I'm not accepting new students.
I think you should join Giving What We Can.
If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at email@example.com.
- I have a new, slightly opinionated survey paper on the current state of research on LLMs: Eight Things to Know about Large Language Models.
- I'm organizing a new AI safety research group at NYU, and I wrote up a blog post explaining what we're up to and why.
- I got tenure and was promoted to associate professor! Thanks to the many, many people who made this possible.
- I'm excited about our large specialist-written benchmarks for machine reading comprehension over long input texts: QuALITY (multiple choice) and SQuALITY (long-form answers). Both are available now, and QuALITY was presented at NAACL.
- I have new funding for data collection/crowdsourcing work from the NSF CAREER program, from Google's Collabs program, and from Open Philanthropy. Thanks to everyone who reviewed the proposals!
- I joined the Department of Computer Science at NYU's Courant Institute as an associated faculty member, in addition to my primary appointment in Linguistics and Data Science.