3D Gaussian Splatting has recently gained traction for its efficient training and real-time rendering. While the vanilla Gaussian Splatting representation is mainly designed for view synthesis, more recent works have investigated how to extend it with scene understanding and language features. However, existing methods lack a detailed comprehension of scenes, limiting their ability to segment and interpret complex structures. To this end, we introduce SuperGSeg, a novel approach that fosters cohesive, context-aware scene representation by disentangling segmentation and language field distillation. SuperGSeg first employs neural Gaussians to learn instance and hierarchical segmentation features from multi-view images with the aid of off-the-shelf 2D masks. These features are then leveraged to create a sparse set of what we call Super-Gaussians. Super-Gaussians facilitate the distillation of 2D language features into 3D space, enabling high-dimensional language feature rendering without a drastic increase in GPU memory. Extensive experiments demonstrate that SuperGSeg outperforms prior works on both open-vocabulary object localization and semantic segmentation tasks.
We describe how Super-Gaussians group 3D Gaussians in 3D space. Through graph-based connected component analysis, Super-Gaussians can be further organized into Instances and Parts of an Instance. As illustrated in the teaser, Super-Gaussians enable the learning of a language feature field for open-vocabulary query tasks. Leveraging Super-Gaussian-based Instances, we support both promptable and promptless instance segmentation, while Super-Gaussian-based Parts allow for finer-grained hierarchical segmentation.
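The graph-based grouping step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature/position inputs, similarity metric, and thresholds are all assumptions; the paper's actual grouping criteria may differ.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def group_super_gaussians(features, positions, sim_thresh=0.9, dist_thresh=0.5):
    """Group Super-Gaussians into Instances via connected components.

    features:  (N, D) per-Super-Gaussian segmentation features (hypothetical).
    positions: (N, 3) Super-Gaussian centers.
    Edges connect pairs that are both feature-similar and spatially close;
    both thresholds are illustrative values, not the paper's.
    """
    # Cosine similarity between L2-normalized features.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    # Pairwise Euclidean distances between centers.
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    adj = (sim > sim_thresh) & (dist < dist_thresh)
    np.fill_diagonal(adj, False)
    # Each connected component of the graph becomes one Instance label.
    n_groups, labels = connected_components(csr_matrix(adj.astype(np.int8)),
                                            directed=False)
    return n_groups, labels
```

Running the same analysis on part-level features instead of instance-level features would yield the finer Parts-of-an-Instance grouping.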
Given an arbitrary text query, SuperGSeg can directly segment 3D Gaussians in 3D space and render the corresponding masks from any viewpoint. SuperGSeg delivers segmentation with precise boundaries and reduced noise.
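A minimal sketch of how such a text query could select 3D Gaussians: compare the query's embedding against each Gaussian's distilled language feature and keep the matches. The feature source (e.g., a CLIP text encoder), the similarity measure, and the threshold are assumptions for illustration only.

```python
import numpy as np

def query_gaussians(lang_feats, text_emb, thresh=0.5):
    """Select 3D Gaussians matching a text query via cosine similarity.

    lang_feats: (N, D) per-Gaussian distilled language features (hypothetical).
    text_emb:   (D,)   embedding of the text query.
    Returns a boolean mask over the N Gaussians; the threshold is illustrative.
    """
    f = lang_feats / np.linalg.norm(lang_feats, axis=1, keepdims=True)
    q = text_emb / np.linalg.norm(text_emb)
    scores = f @ q           # cosine similarity per Gaussian
    return scores > thresh   # Gaussians to keep for this query
```

The selected Gaussians can then be rasterized from any camera pose to render the corresponding 2D mask.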
Given a visual prompt (e.g., a click) on a 2D image from any viewpoint, SuperGSeg can identify the 3D Gaussians corresponding to the clicked part in 3D space and render the part from any desired viewpoint (cross-frame). By learning both hierarchical and instance features, SuperGSeg enables the retrieval of the instance associated with the part for rendering (cross-level) and supports automatic part segmentation rendering.
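The cross-frame and cross-level retrieval described above can be sketched as a label lookup: the click identifies a part label, and a (hypothetical) part-to-instance mapping recovers the parent instance. The data structures below are illustrative assumptions, not the paper's representation.

```python
import numpy as np

def click_to_segments(clicked_part, part_labels, part_to_instance):
    """Retrieve part and parent-instance masks from a clicked part label.

    clicked_part:     part label under the 2D click (assumed already looked up).
    part_labels:      (N,) per-Gaussian part labels (hypothetical).
    part_to_instance: dict mapping part label -> instance label (hypothetical).
    Returns boolean masks over Gaussians for the part (cross-frame rendering)
    and its parent instance (cross-level rendering).
    """
    part_mask = part_labels == clicked_part
    inst_id = part_to_instance[clicked_part]
    sibling_parts = [p for p, i in part_to_instance.items() if i == inst_id]
    instance_mask = np.isin(part_labels, sibling_parts)
    return part_mask, instance_mask
```

Iterating this lookup over every part label would yield the automatic part segmentation rendering mentioned above.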
@misc{liang2024supergsegopenvocabulary3dsegmentation,
title={SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians},
author={Siyun Liang and Sen Wang and Kunyi Li and Michael Niemeyer and Stefano Gasperini and Nassir Navab and Federico Tombari},
year={2024},
eprint={2412.10231},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.10231},
}