Real-Time Fusion Network for RGB-D Semantic Segmentation Incorporating Unexpected Obstacle Detection for Road-Driving Images

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, Vol. 5, No. 4, pp. 5558-5565
Main Authors: Sun, Lei; Yang, Kailun; Hu, Xinxin; Hu, Weijian; Wang, Kaiwei
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2020

Summary: Semantic segmentation has made striking progress due to the success of deep convolutional neural networks. Considering the demands of autonomous driving, real-time semantic segmentation has become a research hotspot in recent years. However, few real-time RGB-D fusion semantic segmentation studies have been carried out, despite depth information being readily accessible nowadays. In this letter, we propose a real-time fusion semantic segmentation network termed RFNet that effectively exploits complementary cross-modal information. Building on an efficient network architecture, RFNet runs swiftly enough for autonomous driving applications. Multi-dataset training is leveraged to incorporate unexpected small-obstacle detection, enriching the recognizable classes required to face unforeseen hazards in the real world. A comprehensive set of experiments demonstrates the effectiveness of our framework. On Cityscapes, our method outperforms previous state-of-the-art semantic segmenters, with excellent accuracy and 22 Hz inference speed at the full 2048 × 1024 resolution, and surpasses most existing RGB-D networks.
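The record carries no implementation details, so as an illustration of the cross-modal fusion idea the summary describes, here is a minimal PyTorch-style sketch: a two-branch encoder where depth features are fused into the RGB branch by element-wise addition at each stage, followed by a lightweight per-pixel classifier. All names, channel widths, and the fusion operator below are assumptions for illustration, not the authors' RFNet code.

```python
# Hypothetical sketch of two-branch RGB-D fusion for semantic segmentation.
# This is NOT the authors' RFNet implementation, only an illustration of
# stage-wise cross-modal fusion as described in the abstract.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    # A basic strided conv stage standing in for an efficient backbone stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class FusionSegNet(nn.Module):
    """Two-branch encoder: depth features are added into the RGB branch
    after every stage; a 1x1 classifier then predicts per-pixel classes."""

    def __init__(self, num_classes=19, widths=(32, 64, 128)):
        super().__init__()
        rgb_chans = [3] + list(widths)    # RGB input has 3 channels
        depth_chans = [1] + list(widths)  # depth input has 1 channel
        self.rgb_stages = nn.ModuleList(
            conv_block(rgb_chans[i], rgb_chans[i + 1])
            for i in range(len(widths)))
        self.depth_stages = nn.ModuleList(
            conv_block(depth_chans[i], depth_chans[i + 1])
            for i in range(len(widths)))
        self.classifier = nn.Conv2d(widths[-1], num_classes, 1)

    def forward(self, rgb, depth):
        x, d = rgb, depth
        for rgb_stage, depth_stage in zip(self.rgb_stages, self.depth_stages):
            x = rgb_stage(x)
            d = depth_stage(d)
            x = x + d  # element-wise cross-modal fusion at each stage
        logits = self.classifier(x)
        # Upsample back to the input resolution for dense prediction.
        return nn.functional.interpolate(
            logits, size=rgb.shape[-2:], mode="bilinear", align_corners=False)


if __name__ == "__main__":
    # Example: 19 Cityscapes classes on a reduced-scale 512 x 256 input.
    net = FusionSegNet(num_classes=19)
    rgb = torch.randn(1, 3, 256, 512)
    depth = torch.randn(1, 1, 256, 512)
    print(net(rgb, depth).shape)  # torch.Size([1, 19, 256, 512])
```

Element-wise addition is just one plausible fusion choice; concatenation or attention-weighted fusion would slot into the same loop, and the 22 Hz figure reported in the summary would depend on the actual backbone, not this toy one.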
ISSN: 2377-3766
DOI: 10.1109/LRA.2020.3007457