SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians

1Technical University of Munich, 2Google, 3Munich Center for Machine Learning, 4Visualais
*Equal Contribution
Teaser

TL;DR. SuperGSeg clusters similar Gaussians into “Super-Gaussians,” merging diverse features for rich 3D scene understanding. It supports open-vocabulary semantic segmentation, promptable/promptless instance segmentation, and hierarchical segmentation.

Abstract

3D Gaussian Splatting has recently gained traction for its efficient training and real-time rendering. While the vanilla Gaussian Splatting representation is mainly designed for view synthesis, more recent works have investigated how to extend it with scene understanding and language features. However, existing methods lack a detailed comprehension of scenes, limiting their ability to segment and interpret complex structures. To this end, we introduce SuperGSeg, a novel approach that fosters cohesive, context-aware scene representation by disentangling segmentation and language field distillation. SuperGSeg first employs neural Gaussians to learn instance and hierarchical segmentation features from multi-view images with the aid of off-the-shelf 2D masks. These features are then leveraged to create a sparse set of what we call Super-Gaussians. Super-Gaussians facilitate the distillation of 2D language features into 3D space. Through Super-Gaussians, our method enables high-dimensional language feature rendering without extreme increases in GPU memory. Extensive experiments demonstrate that SuperGSeg outperforms prior works on both open-vocabulary object localization and semantic segmentation tasks.

Method overview

We initialize the 3D Gaussians from a sparse set of anchor points, each generating k Gaussians with corresponding attributes. First, we train the appearance and segmentation features using RGB images and segmentation masks generated by SAM. Next, we use the segmentation features and their spatial positions to produce a sparse set of Super-Gaussians, each carrying a 512-dimensional language feature. Finally, we supervise these high-dimensional language features with 2D feature maps extracted from CLIP.
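The anchor-based initialization can be sketched as follows. This is a minimal illustration, not the trained pipeline: the decoder matrix, feature sizes, and variable names (`anchor_feat`, `decoder`) are placeholders standing in for the learned decoder MLP that produces each anchor's k Gaussian attributes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 anchor points, k = 3 Gaussians per anchor.
num_anchors, k = 4, 3
anchor_xyz = rng.normal(size=(num_anchors, 3))    # sparse anchor positions
anchor_feat = rng.normal(size=(num_anchors, 32))  # learnable anchor features

# Each anchor spawns k Gaussians whose attributes (here just positional
# offsets) are decoded from its feature; a fixed random matrix stands in
# for the learned decoder MLP.
decoder = rng.normal(size=(32, k * 3)) * 0.1
offsets = (anchor_feat @ decoder).reshape(num_anchors, k, 3)

# Gaussian center = anchor position + decoded per-Gaussian offset.
gaussian_xyz = anchor_xyz[:, None, :] + offsets
print(gaussian_xyz.shape)  # (4, 3, 3)
```

In the full method, the same anchor feature would also decode opacity, covariance, appearance, and segmentation features for each of the k Gaussians.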

Modules of Super-Gaussians Generation


We implement three mapping functions as MLPs, each tailored to one attribute difference. Each function independently captures the relationship between an anchor and its k-nearest Super-Gaussians and encodes the relevancy of its attribute (coordinate, segmentation, or geometry) for the Super-Gaussian assignment into a feature embedding. A final MLP takes the concatenation of the three embeddings as input, integrating the spatial, semantic, and geometric differences into a probabilistic assignment.
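The three mapping functions and the final assignment can be sketched as below. All dimensions and weights are illustrative placeholders (random matrices stand in for trained MLPs); the point is the structure: one small MLP per attribute difference, concatenation, and a softmax over the candidate Super-Gaussians.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    # Tiny two-layer MLP with ReLU; stands in for a learned mapping function.
    return np.maximum(x @ w1, 0.0) @ w2

def w(i, o):
    # Random weights as a placeholder for trained parameters.
    return rng.normal(size=(i, o)) * 0.1

# Hypothetical dims: one anchor vs. its K nearest Super-Gaussian candidates.
K, d_seg, d_emb = 5, 16, 8
d_coord = rng.normal(size=(K, 3))      # spatial difference
d_segf  = rng.normal(size=(K, d_seg))  # segmentation-feature difference
d_geo   = rng.normal(size=(K, 4))      # geometric difference (e.g. scale)

# One MLP per attribute difference -> per-attribute embedding.
e_coord = mlp(d_coord, w(3, 16), w(16, d_emb))
e_segf  = mlp(d_segf, w(d_seg, 16), w(16, d_emb))
e_geo   = mlp(d_geo, w(4, 16), w(16, d_emb))

# Final MLP on the concatenated embeddings -> one logit per candidate,
# softmax-normalized into a probabilistic assignment over the K candidates.
logits = mlp(np.concatenate([e_coord, e_segf, e_geo], axis=1),
             w(3 * d_emb, 16), w(16, 1)).squeeze(-1)
assign = np.exp(logits - logits.max())
assign /= assign.sum()
print(assign.shape)  # (5,), entries summing to 1
```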

Visualization of Super-Gaussians


Super-Gaussians


Merge Super-Gaussians to Instances


Merge Super-Gaussians to Parts per Instance

Here we illustrate how Super-Gaussians group 3D Gaussians in 3D space. By employing graph-based connected component analysis, Super-Gaussians can be further organized into Instances and Parts of an Instance. As illustrated in the teaser, Super-Gaussians enable the learning of a language feature field for open-vocabulary query tasks. Leveraging Super-Gaussian-based Instances, we support both promptable and promptless instance segmentation. Additionally, using Super-Gaussian-based Parts allows for finer-grained hierarchical segmentation.
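The connected-component grouping step can be sketched with a toy graph. The edge list below is made up for illustration; in the actual method the connectivity criterion between Super-Gaussians is derived from the learned features, not hand-specified.

```python
from collections import deque

# Toy adjacency between 6 Super-Gaussians: an edge means the pair is
# considered connected (the real criterion comes from learned features).
edges = [(0, 1), (1, 2), (3, 4)]  # node 5 is isolated
n = 6
adj = [[] for _ in range(n)]
for a, b in edges:
    adj[a].append(b)
    adj[b].append(a)

# Graph-based connected component analysis via BFS:
# each connected component of Super-Gaussians forms one instance.
label = [-1] * n
comp = 0
for s in range(n):
    if label[s] != -1:
        continue
    queue = deque([s])
    label[s] = comp
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if label[v] == -1:
                label[v] = comp
                queue.append(v)
    comp += 1
print(label)  # [0, 0, 0, 1, 1, 2] -> three instances
```

Running the same analysis within one instance, with a stricter connectivity criterion, would yield its Parts, giving the hierarchy described above.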

Open-Vocabulary Object Selection

Given an arbitrary text query, SuperGSeg can directly segment 3D Gaussians in 3D space and render the corresponding masks from any viewpoint. SuperGSeg delivers segmentation with precise boundaries and reduced noise.
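A query of this kind reduces to a relevancy computation between a CLIP text embedding and the distilled 512-dimensional language features. The sketch below uses random vectors as stand-ins for real CLIP features and a hypothetical threshold; only the cosine-similarity selection pattern reflects the described pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 100 Super-Gaussians, each with a 512-d language feature
# distilled from CLIP; the text query is also embedded to 512-d.
feats = rng.normal(size=(100, 512))
query = feats[7] + 0.1 * rng.normal(size=512)  # query close to feature 7

# Cosine similarity between the query and every language feature.
f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
q = query / np.linalg.norm(query)
sim = f @ q

# Threshold the relevancy to select matching Super-Gaussians; their member
# 3D Gaussians are then rendered as the mask from any viewpoint.
selected = np.flatnonzero(sim > 0.9)
print(selected)  # [7]
```

Because selection happens on Super-Gaussians rather than on individual 3D Gaussians, the query touches only a sparse set of 512-d features, which is what keeps memory manageable.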

Click-based Part/Object Selection

Given a visual prompt (e.g., a click) on a 2D image from any viewpoint, SuperGSeg can identify the 3D Gaussians corresponding to the clicked part in 3D space and render the part from any desired viewpoint (cross-frame). By learning both hierarchical and instance features, SuperGSeg enables the retrieval of the instance associated with the part for rendering (cross-level) and supports automatic part segmentation rendering.
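The cross-level retrieval described above can be sketched as a lookup over the hierarchy. The part/instance id arrays and the clicked index are invented for illustration; in the method these labels come from the learned hierarchical and instance features.

```python
import numpy as np

# Hypothetical hierarchy: each Super-Gaussian carries a part id and the
# instance id that part belongs to (learned in the actual method).
part_id     = np.array([0, 0, 1, 1, 2, 3])
instance_id = np.array([0, 0, 0, 0, 1, 1])

clicked = 2  # index of the Super-Gaussian hit by the 2D click ray

# Cross-frame part selection: every Super-Gaussian of the clicked part,
# renderable from any viewpoint.
part_sel = np.flatnonzero(part_id == part_id[clicked])
# Cross-level retrieval: the whole instance containing the clicked part.
inst_sel = np.flatnonzero(instance_id == instance_id[clicked])
print(part_sel.tolist(), inst_sel.tolist())  # [2, 3] [0, 1, 2, 3]
```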

Results on Pixel-wise Open-Vocabulary Semantic Segmentation


Qualitative results on the ScanNet dataset.

Results on ScanNet scene0000_00 for pixel-wise open-vocabulary semantic segmentation, compared with LangSplat.

BibTeX

@misc{liang2024supergsegopenvocabulary3dsegmentation,
  title={SuperGSeg: Open-Vocabulary 3D Segmentation with Structured Super-Gaussians}, 
  author={Siyun Liang and Sen Wang and Kunyi Li and Michael Niemeyer and Stefano Gasperini and Nassir Navab and Federico Tombari},
  year={2024},
  eprint={2412.10231},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.10231}, 
}