Two Stream 3D Semantic Scene Completion

10 Apr 2018 · Martin Garbade, Yueh-Tung Chen, Johann Sawatzky, Juergen Gall

Inferring the 3D geometry and semantic meaning of occluded surfaces is a very challenging task. Recently, a first end-to-end learning approach has been proposed that completes a scene from a single depth image. The approach voxelizes the scene and predicts, for each voxel, whether it is occupied and, if so, its semantic class label. In this work, we propose a two-stream approach that leverages depth information and semantic information, inferred from the RGB image, for this task. The approach constructs an incomplete 3D semantic tensor, which uses a compact three-channel encoding of the inferred semantic information, and uses a 3D CNN to infer the complete 3D semantic tensor. In our experimental evaluation, we show that the proposed two-stream approach substantially outperforms the state of the art for semantic scene completion.
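
The snippet below is a minimal PyTorch-style sketch of the two-stream idea described in the abstract: 2D semantics inferred from the RGB image are assumed to already be projected into an incomplete 3D semantic tensor with a compact three-channel encoding, concatenated with a voxelized depth volume, and completed by a 3D CNN that predicts a class label per voxel. All layer choices, tensor shapes, and module names are illustrative assumptions, not the authors' exact TS3D architecture.

```python
# Sketch of a two-stream semantic scene completion pipeline.
# Shapes, layer sizes, and class counts are assumptions for illustration only.
import torch
import torch.nn as nn


class TwoStreamSSC(nn.Module):
    def __init__(self, num_classes=12, sem_channels=3, depth_channels=1):
        super().__init__()
        # Fuse the semantic stream and the depth stream by channel concatenation.
        in_ch = sem_channels + depth_channels
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1, stride=2),   # downsample
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=3, padding=2, dilation=2),  # wider context
            nn.ReLU(inplace=True),
        )
        # Per-voxel class scores (empty space can be modeled as an extra class).
        self.head = nn.Conv3d(32, num_classes, kernel_size=1)

    def forward(self, sem_volume, depth_volume):
        # sem_volume:   (B, 3, D, H, W) incomplete 3D semantic tensor
        #               (compact three-channel encoding of the 2D semantics)
        # depth_volume: (B, 1, D, H, W) voxelized depth, e.g. a TSDF
        x = torch.cat([sem_volume, depth_volume], dim=1)
        return self.head(self.encoder(x))  # (B, num_classes, D/2, H/2, W/2)


if __name__ == "__main__":
    model = TwoStreamSSC()
    sem = torch.rand(1, 3, 60, 36, 60)    # illustrative voxel grid resolution
    depth = torch.rand(1, 1, 60, 36, 60)
    out = model(sem, depth)
    print(out.shape)  # torch.Size([1, 12, 30, 18, 30])
```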

Task | Dataset | Model | Metric | Value | Rank
3D Semantic Scene Completion | NYUv2 | TS3D | mIoU | 34.1 | #10
3D Semantic Scene Completion | SemanticKITTI | TS3D (reported in the SemanticKITTI dataset paper) | mIoU | 9.5 | #17
3D Semantic Scene Completion | SemanticKITTI | TS3D+DNet (reported in the SemanticKITTI dataset paper) | mIoU | 10.2 | #16
3D Semantic Scene Completion | SemanticKITTI | TS3D+DNet+SATNet (reported in the SemanticKITTI dataset paper) | mIoU | 17.7 | #7
