A Model for Urban Environment Instance Segmentation with Data Fusion

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 23, No. 13, p. 6141
Main Authors: Du, Kaiyue; Meng, Jin; Meng, Xin; Wang, Shifeng; Yang, Jinhua
Format: Journal Article
Language: English
Published: MDPI AG, Switzerland, 04.07.2023
Summary: Fine-grained urban environment instance segmentation is a fundamental task in environment perception for autonomous vehicles. To address this goal, a model was designed that takes LiDAR point cloud data and camera image data as its inputs, and its reliability was enhanced through dual fusion at both the data level and the feature level. By introducing a Markov Random Field algorithm, the Support Vector Machine classification results were refined according to spatial context, giving the model the ability to distinguish objects that look similar but belong to different classes; object classification and instance segmentation of 3D urban environments were then completed by combining Mean Shift clustering. The dual fusion approach in this paper fuses data from different sources more deeply, and the resulting model describes the categories of objects in the environment with a classification accuracy of 99.3% and separates individual objects within each class without requiring instance labels. Moreover, the model does not demand large computational resources or long run times, making it a lightweight, efficient, and accurate instance segmentation model.
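The abstract outlines a classify-then-segment pipeline: per-point classification of fused LiDAR/camera features with an SVM, spatial refinement of the labels with a Markov Random Field, and Mean Shift clustering within each class to separate instances. The following is a minimal, hypothetical sketch of that flow using scikit-learn-style components; the function names, parameters, and the neighborhood-voting step (a crude stand-in for the paper's MRF optimization) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the classify -> smooth -> cluster pipeline described
# in the abstract. Assumes integer class labels and (N, D) feature / (N, 3)
# point arrays; parameter values are placeholders, not the paper's settings.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import MeanShift
from sklearn.neighbors import NearestNeighbors


def classify_points(train_feats, train_labels, feats):
    """SVM classification of per-point fused LiDAR/camera features."""
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(train_feats, train_labels)
    return clf.predict(feats)


def smooth_labels(points, labels, k=8, iterations=3):
    """MRF-inspired smoothing: relabel each point by majority vote of its
    k nearest spatial neighbors (a simplified stand-in for the paper's
    Markov Random Field optimization)."""
    labels = np.asarray(labels)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nn.kneighbors(points)
    for _ in range(iterations):
        votes = labels[idx[:, 1:]]  # neighbor labels, excluding the point itself
        labels = np.array([np.bincount(v).argmax() for v in votes])
    return labels


def segment_instances(points, labels, bandwidth=1.0):
    """Mean Shift clustering within each semantic class yields instance IDs."""
    instance_ids = np.full(len(points), -1)
    next_id = 0
    for cls in np.unique(labels):
        mask = labels == cls
        clusters = MeanShift(bandwidth=bandwidth).fit_predict(points[mask])
        instance_ids[mask] = clusters + next_id
        next_id = instance_ids[mask].max() + 1
    return instance_ids
```

In this sketch the data-level and feature-level fusion of the paper is assumed to have already produced the per-point feature vectors passed to classify_points; only the classification, label smoothing, and per-class instance clustering stages are illustrated.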
ISSN: 1424-8220
DOI: 10.3390/s23136141