3D Generation from Unstructured Single-view Data

Speaker

Yinghao Xu is a final-year Ph.D. student at the Multimedia Lab (MMLab), Department of Information Engineering, The Chinese University of Hong Kong, supervised by Prof. Dahua Lin and Prof. Bolei Zhou. He is interested in generative models and neural rendering, particularly 3D generative models. During his Ph.D., he was fortunate to visit the Stanford Computational Imaging Lab, working with Prof. Gordon Wetzstein. Several of his papers have been selected for oral presentation or as best paper candidates at CVPR, ECCV, NeurIPS, and ICLR.

Abstract

Pixel-based content creation has made remarkable progress thanks to 2D generative models. However, a deeper understanding of the 3D world beyond image space is necessary for a wide range of real-world applications, such as AR and VR. Traditional 3D content authoring pipelines require professional expertise, and building large 3D datasets demands significant financial investment. In this talk, I will introduce our recent work on enabling 3D generative modeling from unstructured 2D images, especially single-view data, by introducing powerful and effective 3D representations. This paves the way for generating high-quality 3D assets efficiently. We also generalize the 3D generative model to in-the-wild objects and complex scenes, enabling 3D image generation on ImageNet and controllable 3D scene synthesis. These efforts are integral to our long-term vision of enabling high-quality, user-friendly 3D content creation for a broad audience.

Video

Coming soon. Stay tuned. :-)