Molmo2: Open Weights and Data for Vision-Language Models with Video Understanding and Grounding

Speaker: Jieyu Zhang (UW)

Location: 60 Fifth Avenue, Room 527

Date: Thursday, January 15, 2026

Today’s strongest video-language models (VLMs) remain proprietary. The strongest open-weight models either rely on synthetic data from proprietary VLMs, effectively distilling from them, or do not disclose their training data or recipe. As a result, the open-source community lacks the foundations needed to improve on the state-of-the-art video (and image) language models. Crucially, many downstream applications require more than high-level video understanding; they require grounding, either by pointing or by tracking in pixels, a capability that even proprietary models lack. We present Molmo2, a new family of VLMs that are state-of-the-art among open-source models and demonstrate exceptional new capabilities in point-driven grounding across single-image, multi-image, and video tasks. Our key contribution is a collection of 7 new video datasets and 2 multi-image datasets, including a dataset of highly detailed video captions for pre-training, a free-form video Q&A dataset for fine-tuning, a new object-tracking dataset with complex queries, and a novel video pointing dataset, all collected without the use of closed VLMs. We also present a training recipe for this data built on an efficient packing and message-tree encoding scheme, and show that bi-directional attention on vision tokens and a novel token-weighting strategy improve performance. Our best-in-class 8B model outperforms other open-weight, open-data models on short videos, counting, and captioning, and is competitive on long videos.
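For readers curious about the attention change mentioned in the abstract, the following is a minimal, hypothetical sketch of what bi-directional attention over vision tokens can look like inside an otherwise causal decoder. The function and variable names (build_attention_mask, is_vision, image_id) are illustrative assumptions, not Molmo2's actual implementation.

```python
# Hypothetical sketch: lift the causal restriction among vision tokens of the
# same image, while text tokens keep standard causal attention.
# Names and interfaces here are assumptions for illustration only.
import numpy as np

def build_attention_mask(is_vision: np.ndarray, image_id: np.ndarray) -> np.ndarray:
    """Return a boolean [T, T] mask; mask[q, k] = True means query q may attend to key k.

    is_vision: [T] bool, True where the token is a vision (image-patch) token.
    image_id:  [T] int, index of the image a vision token belongs to (-1 for text).
    """
    T = is_vision.shape[0]
    # Standard causal mask: each token attends to itself and earlier tokens.
    mask = np.tril(np.ones((T, T), dtype=bool))
    # Allow patches of the same image to attend to each other in both directions.
    same_image = (
        (image_id[:, None] == image_id[None, :])
        & is_vision[:, None]
        & is_vision[None, :]
    )
    return mask | same_image

# Quick demo: one text token, three patches of image 0, then two text tokens.
is_vision = np.array([False, True, True, True, False, False])
image_id = np.array([-1, 0, 0, 0, -1, -1])
mask = build_attention_mask(is_vision, image_id)
assert mask[1, 3]       # an earlier patch may now attend to a later patch of its image
assert not mask[0, 4]   # text remains strictly causal
```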
Bio: Jieyu Zhang is a final-year Ph.D. student in Computer Science at the University of Washington. His research focuses on multimodal learning and data-centric AI, particularly vision–language models, evaluation benchmarks, and weak supervision. His work has received multiple best paper and best paper runner-up awards at conference workshops, and he is a recipient of the Apple AI/ML PhD Fellowship and the OpenAI Superalignment Fellowship.