Episode 197: Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

22:43 | 🤗 Upvotes: 11 | cs.CV, cs.GR

Authors:
Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park

Title:
Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction

arXiv:
http://arxiv.org/abs/2412.06234v2

Abstract:
Generalized feed-forward Gaussian models have achieved significant progress in sparse-view 3D reconstruction by leveraging prior knowledge from large multi-view datasets. However, these models often struggle to represent high-frequency details due to the limited number of Gaussians. While the densification strategy used in per-scene 3D Gaussian splatting (3D-GS) optimization can be adapted to the feed-forward models, it may not be ideally suited for generalized scenarios. In this paper, we propose Generative Densification, an efficient and generalizable method to densify Gaussians generated by feed-forward models. Unlike the 3D-GS densification strategy, which iteratively splits and clones raw Gaussian parameters, our method up-samples feature representations from the feed-forward models and generates their corresponding fine Gaussians in a single forward pass, leveraging the embedded prior knowledge for enhanced generalization. Experimental results on both object-level and scene-level reconstruction tasks demonstrate that our method outperforms state-of-the-art approaches with comparable or smaller model sizes, achieving notable improvements in representing fine details.
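The core operation the abstract describes, up-sampling per-Gaussian feature representations and decoding them into fine Gaussians in a single forward pass, can be sketched compactly. The PyTorch snippet below is an illustrative reading of that one sentence only: the module name, feature dimension, number of children per coarse Gaussian (k), and offset-based position decoding are assumptions for the sketch, not the authors' architecture.

```python
# Minimal sketch of generative densification as described in the abstract:
# up-sample per-Gaussian features from a feed-forward model and decode them
# into k fine Gaussians each, in a single forward pass.
# Shapes, dimensions, and the offset parameterization are illustrative assumptions.
import torch
import torch.nn as nn

class GenerativeDensifier(nn.Module):
    def __init__(self, feat_dim: int = 128, k: int = 4, gaussian_dim: int = 14):
        # gaussian_dim: e.g. 3 (position offset) + 4 (rotation) + 3 (scale)
        #               + 3 (color) + 1 (opacity)
        super().__init__()
        self.k = k
        # Up-sample each coarse feature into k fine features.
        self.upsampler = nn.Sequential(
            nn.Linear(feat_dim, feat_dim * k),
            nn.GELU(),
        )
        # Decode each fine feature into raw fine-Gaussian parameters.
        self.decoder = nn.Linear(feat_dim, gaussian_dim)

    def forward(self, coarse_feats: torch.Tensor, coarse_pos: torch.Tensor) -> torch.Tensor:
        # coarse_feats: (N, feat_dim) features of the coarse Gaussians to densify
        # coarse_pos:   (N, 3) centers of those Gaussians
        n, d = coarse_feats.shape
        fine_feats = self.upsampler(coarse_feats).view(n, self.k, d)  # (N, k, feat_dim)
        fine_params = self.decoder(fine_feats)                        # (N, k, gaussian_dim)
        # Fine positions are predicted as offsets from the parent Gaussian center.
        fine_pos = coarse_pos.unsqueeze(1) + fine_params[..., :3]
        return torch.cat([fine_pos, fine_params[..., 3:]], dim=-1).reshape(n * self.k, -1)

# Usage: densify 1000 coarse Gaussians (picked arbitrarily here; which Gaussians
# to densify, and how, is where the paper's contribution lies).
densifier = GenerativeDensifier()
coarse_feats = torch.randn(1000, 128)
coarse_pos = torch.randn(1000, 3)
fine_gaussians = densifier(coarse_feats, coarse_pos)  # (4000, 14)
```

The point of the sketch is the contrast the abstract draws with 3D-GS densification: instead of iteratively splitting and cloning raw Gaussian parameters during per-scene optimization, a learned decoder amortizes densification across scenes in a single forward pass.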

