CS Colloquium: Learning 3D representations with minimal supervision

Speaker: Yue Wang

Location: TBA
Videoconference link: https://nyu.zoom.us/j/94325731588

Date: Tuesday, March 29, 2022

Deep learning has demonstrated considerable success in embedding
images and other 2D representations into compact feature spaces
for downstream tasks such as recognition, registration, and generation.
Learning from 3D data, however, is the missing piece needed for embodied
agents to perceive their surrounding environments. To bridge the gap
between 3D perception and robotic intelligence, my present efforts focus
on learning 3D representations with minimal supervision from a geometry
perspective. In this talk, I will discuss two key approaches to reducing
the amount of human supervision required by current 3D deep learning
algorithms. First, I will describe how to leverage the geometry of point
clouds and incorporate this inductive bias into point cloud learning
pipelines. These learning
models can be used to tackle object recognition and point cloud
registration problems. Second, I will present our work on leveraging
natural supervision in point clouds to perform self-supervised learning.
In addition, I will discuss how these 3D learning algorithms enable
human-level perception for robotic applications such as self-driving
cars. Finally, the talk will conclude with a discussion of future
directions toward complete and active 3D learning systems.