Novel view synthesis (NVS) from 2D images aims to generate unseen views of a scene given multiple input observations. It is a fundamental task in computer vision that has garnered significant attention due to recent advances in 3D representations and neural rendering. Techniques such as Neural Radiance Fields and 3D Gaussian Splatting have substantially improved NVS quality, yet the demand for more efficient approaches—in terms of space, time, and storage—remains a critical research direction. In this talk, I will present our recent efforts to address these challenges. Specifically, I will discuss two of our latest works: FCGS (ICLR 2025), which introduces a compression method for 3D Gaussian Splatting, and PanSplat (CVPR 2025), a feed-forward model designed for panoramic novel view synthesis.
Speaker Bio
Qianyi Wu is a final-year PhD student in the Department of Data Science and AI, Monash University, under the supervision of Prof. Jianfei Cai. He received his B.S. degree from the Special Class for the Gifted Young at the University of Science and Technology of China (USTC) in 2016, and his M.Sc. degree from the Graphics and Geometric Computing Laboratory of the School of Mathematical Sciences at USTC in 2019, under the supervision of Prof. Juyong Zhang. He worked as a research scientist intern at Meta Reality Labs in 2024. His recent research focuses on 3D reconstruction and generation.
More Details
- When: 23 May 2025, 3–4pm (Brisbane time)
- Speaker: Qianyi Wu (Monash University)
- Host: Dr Yujun Cai
- Zoom: https://uqz.zoom.us/j/88094383147