Basics
Visiting Researcher (Sabbatical Year 2022–2023)
Anthropic
Assistant Professor of Linguistics, Data Science & Computer Science
Co-PI, ML² Group & CILVR Lab
New York University
PhD 2016, Stanford NLP Group & Stanford Linguistics
Interests
I study artificial neural network models for natural language understanding, with a focus on building high-quality training and evaluation data, applying these models to scientific questions in syntax and semantics, and contributing to work on language model alignment and control.
I think you should join Giving What We Can.
Impact
You're most likely to have encountered my group by way of our SNLI and MultiNLI datasets, our GLUE and SuperGLUE benchmark competitions, our jiant software toolkit, or our papers on topics like the inductive biases of language models or the viability of transfer learning for NLP.
Contact
If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at bowman@nyu.edu.
News
- I got tenure and will be an associate professor as of this fall! Thanks to the many, many people who made this possible.
- I have new major funding for work on language model alignment, and my group is hiring for both postdoc and junior researcher positions on the project.
- I'm excited about our large specialist-written QuALITY (multiple-choice) and SQuALITY (long-text answer) benchmarks for machine reading comprehension over long input texts. Both are available now, and QuALITY will be presented at NAACL.
- I'll be on leave and away from NYU for a sabbatical year from Summer 2022 through Summer 2023 while taking on a visiting researcher role at Anthropic in San Francisco. I'll still be advising researchers at NYU, but my availability for other service will be very limited.
- I have new funding for data collection/crowdsourcing work from the NSF CAREER program and from Google's Collabs program. Thanks to everyone who reviewed the proposals!
- I joined the Department of Computer Science at NYU's Courant Institute as an associated faculty member, in addition to my primary appointment in Linguistics and Data Science.
- Our paper "SuperGLUE: A Large-Scale Evaluation of Semi-Supervised Learning for Natural Language Understanding" won the best paper award at ACL 2019.
- MultiNLI, the large-scale multilingual parallel corpus we released in late 2017, was featured in a Nature article.
- I gave a keynote talk on "Data-Driven Language Learning" at the first joint conference of the ACL and the EACL.
- Our paper on the inductive biases of language models (co-authored with Yoshua Bengio, Aaron Courville, and Pascal Vincent) won the best paper award at EMNLP 2016.
- Those last four news items aren't real. They were generated by GPT-3, conditioned on an earlier version of this page, without cherry-picking.