Drivable Dirt Road Region Identification Using Image and Point Cloud Semantic Segmentation Fusion
Published in: IEEE Transactions on Intelligent Transportation Systems, Vol. 23, No. 8, pp. 13203-13216
Main Authors:
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2022
Summary: Driving scene understanding is an essential technique for realizing autonomous driving. Although many large-scale datasets for autonomous driving have enabled studies of driving scene understanding, unpaved dirt roads have not been considered as a target driving environment. Drivable region identification is important on dirt roads to prevent damage to the vehicle and passengers. Semantic segmentation of camera images and lidar point clouds has been used to recognize the environment around an autonomous vehicle: the road and surrounding objects are recognized by classifying image pixels and points into semantic classes. In this study, we introduce a perception method for drivable region identification on dirt roads through the fusion of image and point cloud semantic segmentation. Our approach combines an image semantic segmentation algorithm and a point cloud semantic segmentation algorithm in a bird's-eye-view grid map. To transform the point-wise drivable region identification results into area-wise information, we adopt the alpha-shape algorithm. Speed is improved by using only a small proportion of the drivable points, and the resulting accuracy degradation is compensated by accumulating the perception results over time.
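The alpha-shape step in the summary, which turns point-wise drivable labels into an area-wise region, can be pictured with a short sketch. The following is a minimal illustration, not the paper's implementation: it assumes the drivable points are already projected onto the bird's-eye-view plane, and the function name, alpha value, and synthetic points are invented for the example. A common construction keeps the Delaunay triangles whose circumradius is below 1/alpha and takes the edges that belong to exactly one kept triangle as the region boundary.

```python
# Illustrative sketch (not the authors' code): extracting an area-wise
# drivable region from 2-D drivable points via an alpha shape.
import numpy as np
from scipy.spatial import Delaunay


def alpha_shape_edges(points: np.ndarray, alpha: float) -> set[tuple[int, int]]:
    """Return boundary edges of the alpha shape of 2-D `points` (N x 2).

    A Delaunay triangle is kept when its circumradius is below 1/alpha;
    edges belonging to exactly one kept triangle form the region boundary.
    """
    tri = Delaunay(points)
    edge_count: dict[tuple[int, int], int] = {}
    for ia, ib, ic in tri.simplices:
        pa, pb, pc = points[ia], points[ib], points[ic]
        # Side lengths, then circumradius via Heron's formula for the area.
        a = np.linalg.norm(pb - pc)
        b = np.linalg.norm(pa - pc)
        c = np.linalg.norm(pa - pb)
        s = (a + b + c) / 2.0
        area = max(np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0)), 1e-12)
        circum_r = a * b * c / (4.0 * area)
        if circum_r < 1.0 / alpha:
            for i, j in ((ia, ib), (ib, ic), (ic, ia)):
                e = (min(i, j), max(i, j))
                edge_count[e] = edge_count.get(e, 0) + 1
    # Interior edges are shared by two kept triangles; boundary edges by one.
    return {e for e, n in edge_count.items() if n == 1}


# Hypothetical usage: `drivable_xy` stands in for BEV coordinates of points
# that the point cloud segmentation labeled as drivable road.
rng = np.random.default_rng(0)
drivable_xy = rng.uniform(0.0, 20.0, size=(500, 2))
boundary = alpha_shape_edges(drivable_xy, alpha=0.5)
print(f"{len(boundary)} boundary edges outline the drivable region")
```

In this framing, the speed/accuracy trade-off mentioned in the summary is plausible: subsampling the drivable points before triangulation shrinks the alpha-shape cost, while accumulating the per-frame regions over time can smooth out the coarser boundary that subsampling produces.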
ISSN: 1524-9050, 1558-0016
DOI: 10.1109/TITS.2021.3121710