I study artificial neural network models for natural language understanding, with a focus on building high-quality training and evaluation data, and on applying these models to scientific questions in syntax and semantics.
If we haven't been in contact previously, please look through this FAQ before emailing me. You can reach me at firstname.lastname@example.org.
My group had two papers accepted at NeurIPS: a paper introducing the new SuperGLUE benchmark (with a spotlight talk!), and an analysis paper with Nishant Subramani on language model latent spaces.
An analysis paper by my fifteen-person NYU Linguistics seminar was accepted to EMNLP. tl;dr: Don't be too confident in the big-picture conclusions you draw from any single model analysis experiment.
A paper with Najoung Kim et al., from our JSALT 2018 collaboration, won the best paper award at *SEM.