In this work, we introduce OpenIns3D, a new framework for 3D open-vocabulary scene understanding that requires no aligned images as input.
The OpenIns3D framework employs a "Mask-Snap-Lookup" scheme. The "Mask" module learns class-agnostic mask proposals in 3D point clouds; the "Snap" module generates synthetic scene-level images at multiple scales and leverages 2D vision-language models to extract objects of interest; and the "Lookup" module searches through the outcomes of "Snap" to assign category names to the proposed masks.
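To make the control flow concrete, the sketch below outlines how the three modules might fit together. It is a minimal sketch under the assumption that each stage can be expressed as a callable; all names here (MaskSnapLookup, propose_masks, render_snapshots, detector_2d, lookup_labels) are hypothetical placeholders, not the actual OpenIns3D API.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class MaskSnapLookup:
    """Minimal sketch of the Mask-Snap-Lookup control flow (hypothetical API)."""
    propose_masks: Callable     # "Mask": class-agnostic 3D mask proposals
    render_snapshots: Callable  # "Snap": synthetic scene-level images at multiple scales
    detector_2d: Callable       # any 2D open-vocabulary model, (image, queries) -> detections
    lookup_labels: Callable     # "Lookup": match 2D detections back to 3D masks

    def run(self, point_cloud, text_queries: Sequence[str]):
        # "Mask": propose class-agnostic instance masks directly in 3D.
        masks_3d = self.propose_masks(point_cloud)
        # "Snap": render synthetic views of the scene; no aligned real
        # images are required as input.
        snapshots = self.render_snapshots(point_cloud)
        # Run the pluggable 2D model on each rendered view.
        detections = [self.detector_2d(img, text_queries) for img in snapshots]
        # "Lookup": assign a category name to each 3D mask proposal by
        # searching the 2D detections across views.
        labels = self.lookup_labels(masks_3d, detections)
        return masks_3d, labels
```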
This approach, though simple, achieves state-of-the-art performance across a wide range of 3D open-vocabulary tasks, including recognition, object detection, and instance segmentation, on both indoor and outdoor datasets.
Moreover, OpenIns3D allows different 2D detectors to be swapped in without retraining. When integrated with powerful 2D open-world models, it achieves excellent results on scene understanding tasks.
Furthermore, when combined with LLM-powered 2D models, OpenIns3D exhibits an impressive capability to comprehend and process highly complex text queries that demand intricate reasoning and real-world knowledge.
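Continuing the hypothetical sketch above (and reusing its placeholder callables propose_masks, render_snapshots, and lookup_labels), swapping the 2D model amounts to passing a different callable, so the trained 3D components are untouched; clip_style_detector and llm_grounding_detector are illustrative names, not real interfaces.

```python
def clip_style_detector(image, text_queries):
    ...  # e.g., a conventional 2D open-vocabulary detector

def llm_grounding_detector(image, text_queries):
    ...  # e.g., an LLM-powered model for reasoning-heavy queries

# Same "Mask", "Snap", and "Lookup" stages, different 2D plug-in;
# no retraining of the 3D components is involved.
pipeline_a = MaskSnapLookup(propose_masks, render_snapshots,
                            clip_style_detector, lookup_labels)
pipeline_b = MaskSnapLookup(propose_masks, render_snapshots,
                            llm_grounding_detector, lookup_labels)
```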