SnapNet: 3D point cloud semantic labeling with 2D deep segmentation networks

Bibliographic Details
Published in: Computers & Graphics, Vol. 71, pp. 189-198
Main Authors: Boulch, Alexandre; Guerry, Joris; Le Saux, Bertrand; Audebert, Nicolas
Format: Journal Article
Language: English
Published: Oxford: Elsevier Ltd, 01.04.2018
Summary:
• Scalable method using voxelization to produce smaller, homogeneous point clouds.
• Snapshots of the point cloud in 2D, with two modalities: RGB and geometric features.
• Segmentation using a deep segmentation network.
• First place on the Lidar benchmark Semantic8.
• Experiments on photogrammetric and RGB-D data.

In this work, we describe a new, general, and efficient method for unstructured point cloud labeling. As efficiently applying deep Convolutional Neural Networks (CNNs) to 3D data remains an open problem, we propose a framework that applies CNNs to multiple 2D image views (or snapshots) of the point cloud. The approach rests on three core ideas. (i) We pick many suitable snapshots of the point cloud, generating two types of images: a Red-Green-Blue (RGB) view and a depth composite view containing geometric features. (ii) We then perform pixel-wise labeling of each pair of 2D snapshots using fully convolutional networks; different architectures are tested to achieve a profitable fusion of our heterogeneous inputs. (iii) Finally, we perform a fast back-projection of the label predictions into 3D space, using efficient buffering to label every 3D point. Experiments show that the method is suitable for various types of point clouds, such as Lidar or photogrammetric data.
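The snapshot (i) and back-projection (iii) steps of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (project_points, backproject_labels), the pinhole-camera model, and the NumPy z-buffer are illustrative assumptions, whereas the paper relies on an efficient buffering scheme for the back-projection.

```python
import numpy as np

def project_points(points, K, R, t, image_size):
    """Step (i), sketched: project 3D points into a virtual camera
    (simple pinhole model) to form a 2D "snapshot".
    points: (N, 3); K: 3x3 intrinsics; R, t: camera pose; image_size: (H, W).
    """
    H, W = image_size
    cam = R @ points.T + t[:, None]       # (3, N) points in camera frame
    depth = cam[2]                        # z coordinate = depth per point
    # Guard against division by non-positive depth (points behind camera).
    uv = (K @ cam)[:2] / np.where(depth > 0, depth, np.inf)
    px = np.round(uv).astype(int).T       # (N, 2) integer (u, v) pixel coords
    valid = (depth > 0) & (px[:, 0] >= 0) & (px[:, 0] < W) \
            & (px[:, 1] >= 0) & (px[:, 1] < H)
    return px, depth, valid

def backproject_labels(points, label_image, K, R, t):
    """Step (iii), sketched: transfer per-pixel label predictions back to
    the 3D points. A z-buffer keeps, for each pixel, only the closest
    (visible) point; occluded points stay unlabeled (-1)."""
    H, W = label_image.shape
    px, depth, valid = project_points(points, K, R, t, (H, W))
    zbuf = np.full((H, W), np.inf)        # nearest depth seen per pixel
    labels = np.full(len(points), -1, dtype=int)
    for i in np.argsort(depth):           # visit points near-to-far
        if not valid[i]:
            continue
        u, v = px[i]
        if depth[i] < zbuf[v, u]:         # first (nearest) point wins
            zbuf[v, u] = depth[i]
            labels[i] = label_image[v, u]
    return labels
```

In the full method, many such virtual views are sampled around the (voxelized) point cloud, step (ii) produces one label image per view, and the per-view labels recovered here would be accumulated per point, e.g. by voting across views.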
ISSN: 0097-8493, 1873-7684
DOI: 10.1016/j.cag.2017.11.010