Adaptive Active Fusion of Camera and Single-Point LiDAR for Depth Estimation
| Published in | IEEE Transactions on Instrumentation and Measurement, Vol. 72, pp. 1-9 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023 |
Summary: Depth sensing is an important problem in many applications, such as autonomous driving, robotics, and automation. This article presents an adaptive active fusion method for scene depth estimation using a camera and a single-point light detection and ranging (LiDAR) sensor. An active scanning mechanism is proposed to guide laser scanning based on critical visual and saliency features, and a convolutional spatial propagation network (CSPN) is designed to generate and refine a full depth map from the sparse depth scans. The active scanning mechanism generates a depth mask using log-spectrum saliency detection, Canny edge detection, and uniform sampling; the mask indicates critical regions that require high-resolution laser scanning. To reconstruct a full depth map, the CSPN extracts affinity matrices from the sparse depth scans while preserving global spatial information from the images. The proposed method was evaluated against state-of-the-art methods on the NYU Depth Dataset V2 (NYUv2), where it outperformed them in reconstruction accuracy and robustness to measurement noise. The method was also evaluated in real-world scenarios.
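
The abstract names three ingredients of the scan-mask generation: log-spectrum (spectral residual) saliency, Canny edge detection, and uniform sampling. The sketch below shows one plausible way to combine them into a binary mask of pixels to scan at high resolution; the thresholds, sampling stride, and the OR-fusion rule are illustrative assumptions, since the record does not give the paper's parameters.

```python
# Sketch of the scan-mask generation named in the abstract: spectral residual
# (log-spectrum) saliency + Canny edges + uniform sampling. Thresholds and the
# fusion rule are assumptions, not the paper's exact settings.
import cv2
import numpy as np

def spectral_residual_saliency(gray: np.ndarray, size: int = 64) -> np.ndarray:
    """Log-spectrum saliency (Hou & Zhang, 2007), resized back to input shape."""
    small = cv2.resize(gray.astype(np.float32), (size, size))
    F = np.fft.fft2(small)
    log_amp = np.log(np.abs(F) + 1e-8)
    phase = np.angle(F)
    # Spectral residual: log amplitude minus its local (3x3 box-filtered) mean.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (9, 9), 2.5)
    sal = cv2.resize(sal, (gray.shape[1], gray.shape[0]))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def build_scan_mask(image: np.ndarray, sal_thresh: float = 0.5,
                    canny_lo: int = 50, canny_hi: int = 150,
                    stride: int = 16) -> np.ndarray:
    """Binary mask of pixels selected for dense single-point LiDAR scanning.

    `image` is assumed to be a BGR frame as loaded by cv2.imread.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    mask = spectral_residual_saliency(gray) > sal_thresh   # salient regions
    mask |= cv2.Canny(gray, canny_lo, canny_hi) > 0        # likely depth edges
    uniform = np.zeros_like(mask)
    uniform[::stride, ::stride] = True                     # coarse uniform coverage
    return mask | uniform
```

A scanning controller would then steer the single-point LiDAR densely over `True` pixels (e.g., `build_scan_mask(cv2.imread("scene.png"))`) and sparsely elsewhere, which is the adaptive allocation the abstract describes.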
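The record does not detail the network itself, but the CSPN propagation it cites is known from the CSPN literature (Cheng et al., ECCV 2018): each pixel's depth is iteratively replaced by an affinity-weighted average of its 3x3 neighborhood, and the measured sparse LiDAR depths are re-imposed after every step. Below is a hedged PyTorch sketch of that propagation; `cspn_refine`, the 8-channel `affinity` input (assumed to be predicted by some upstream network), and `iters=24` are illustrative placeholders, not the paper's implementation.

```python
# One plausible reading of CSPN propagation for depth completion. The affinity
# map (B, 8, H, W) covers the 3x3 neighborhood minus the center; the center
# weight is 1 - sum(neighbor weights), as in the CSPN formulation.
import torch
import torch.nn.functional as F

def cspn_refine(init_depth, affinity, sparse_depth, iters=24):
    # init_depth, sparse_depth: (B, 1, H, W); affinity: (B, 8, H, W).
    # Normalize by the sum of absolute affinities so propagation stays stable.
    aff = affinity / (affinity.abs().sum(dim=1, keepdim=True) + 1e-8)
    center = 1.0 - aff.sum(dim=1, keepdim=True)  # weight kept by the pixel itself
    valid = (sparse_depth > 0).float()           # mask of measured LiDAR points
    d = init_depth
    for _ in range(iters):
        # Gather the 8 neighbors of every pixel via unfold on a padded map.
        padded = F.pad(d, (1, 1, 1, 1), mode="replicate")
        patches = F.unfold(padded, kernel_size=3).view(d.shape[0], 9, *d.shape[2:])
        neighbors = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)  # drop center
        d = center * d + (aff * neighbors).sum(dim=1, keepdim=True)
        d = valid * sparse_depth + (1.0 - valid) * d  # re-impose measurements
    return d
```

The replacement step in the last line is what ties the refinement to the active scan: wherever the LiDAR actually measured depth, the network output is overwritten by the measurement, and the propagation diffuses those anchors outward along image-derived affinities.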
ISSN: 0018-9456 (print); 1557-9662 (electronic)
DOI: 10.1109/TIM.2023.3284129