Target Detection for Construction Machinery Based on Deep Learning and Multisource Data Fusion
Published in: IEEE Sensors Journal, Vol. 23, No. 10, pp. 11070-11081
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 15.05.2023
Summary: Target detection in real-world operating environments is a challenging task, affected by complex and dynamic landscapes, varying illumination, and vibration. This article therefore presents research on road target detection based on deep learning, combining visual image data with point cloud data from light detection and ranging (LiDAR). First, to address the sparse and disordered nature of the point cloud, its depth map was densified and the ground was removed, producing an image dataset suitable for training. Next, balancing the computational capacity of the on-board processor against detection accuracy, the MY3Net network is designed by integrating MobileNet v2, a lightweight network, as the feature extractor and You Only Look Once (YOLO) v3, a high-precision network, as the multiscale target detector, to detect targets in both red-green-blue (RGB) images and the densified depth maps. Finally, a decision-level fusion model is proposed that integrates the detection results of the RGB images and depth maps with dynamic weights. Experimental results show that the proposed approach achieves high detection accuracy even under complex illumination conditions.
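The abstract describes decision-level fusion of RGB and depth-map detections with dynamic weights but does not specify the weighting rule. The sketch below is a hypothetical illustration only: it assumes boxes are `(x1, y1, x2, y2, score, label)` tuples, matches detections by label and IoU overlap, and (as one plausible choice, not the paper's method) shifts weight toward the depth-map detector as the RGB frame's mean brightness drops.

```python
# Hypothetical sketch of decision-level fusion with dynamic weights.
# The paper's exact weighting scheme is not given in the abstract; here the
# weights are assumed to depend on mean RGB brightness (range 0-255), so the
# depth branch dominates in dark scenes. All names are illustrative.

def dynamic_weights(mean_brightness, lo=40.0, hi=200.0):
    """Return (w_rgb, w_depth) with w_rgb + w_depth == 1."""
    # Clamp brightness into [lo, hi] and map linearly to [0, 1].
    t = min(max(mean_brightness, lo), hi)
    w_rgb = (t - lo) / (hi - lo)
    return w_rgb, 1.0 - w_rgb

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse(rgb_dets, depth_dets, mean_brightness, iou_thresh=0.5):
    """Fuse two detection lists of (x1, y1, x2, y2, score, label) tuples."""
    w_rgb, w_depth = dynamic_weights(mean_brightness)
    fused, used = [], set()
    for bx in rgb_dets:
        match = None
        for j, by in enumerate(depth_dets):
            # Same class and sufficient overlap -> same physical target.
            if j not in used and bx[5] == by[5] and iou(bx[:4], by[:4]) >= iou_thresh:
                match = j
                break
        if match is None:
            fused.append((*bx[:4], w_rgb * bx[4], bx[5]))
        else:
            by = depth_dets[match]
            used.add(match)
            # Weighted combination of the two detectors' confidences.
            fused.append((*bx[:4], w_rgb * bx[4] + w_depth * by[4], bx[5]))
    # Unmatched depth-only detections keep only the depth weight.
    fused += [(*b[:4], w_depth * b[4], b[5])
              for j, b in enumerate(depth_dets) if j not in used]
    return fused
```

At mid brightness (weights 0.5/0.5), two overlapping "car" boxes with scores 0.8 (RGB) and 0.6 (depth) fuse into a single box with confidence 0.7; in a dark frame the same pair would lean toward the depth score instead.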
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3264526