Semantic segmentation based on aggregated features and contextual information


Bibliographic Details
Published in: 2016 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 862 - 867
Main Authors: Chuanxia Zheng, Jianhua Wang, Weihai Chen, Xingming Wu
Format: Conference Proceeding
Language: English
Published: IEEE, 01.12.2016

Summary: In this paper, a novel semantic segmentation model based on aggregated features and contextual information is proposed. Given an RGB-D image, we train a support vector machine (SVM) to predict initial labels using aggregated features, and then optimize the predictions using contextual information. For the aggregated features, local features are extracted on regions to capture the visual appearance of objects, and global features are exploited to represent scene information, so that the proposed model can draw on more discriminative features. For the contextual information, a novel multi-label conditional random field (CRF) model is constructed to jointly optimize the initial semantic and attribute predictions. Experimental results on the public NYU v2 dataset demonstrate that the proposed model outperforms existing state-of-the-art methods on the challenging 40-class task, yielding a higher mean IU accuracy of 33.7% and a pixel average accuracy of 64.1%. In particular, the prediction accuracy of "small" classes is improved significantly.
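The abstract describes a two-stage pipeline: an SVM predicts initial region labels from aggregated (local plus global) features, and a multi-label CRF then refines the predictions with contextual information. The sketch below is illustrative only and is not the authors' code; the feature aggregation, the toy data, and the final argmax stand-in for the CRF refinement are all assumptions.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Feature extraction and the multi-label CRF are simplified stand-ins.
import numpy as np
from sklearn.svm import LinearSVC

def aggregate_features(local_feats, global_feat):
    """Concatenate per-region local features with a shared global scene feature."""
    n_regions = local_feats.shape[0]
    tiled_global = np.tile(global_feat, (n_regions, 1))
    return np.hstack([local_feats, tiled_global])

# Stage 1: train an SVM on aggregated features (toy random data here).
rng = np.random.default_rng(0)
X_local = rng.normal(size=(200, 64))       # e.g. per-region appearance features
x_global = rng.normal(size=(32,))          # e.g. scene-level feature
y = rng.integers(0, 5, size=200)           # region labels for 5 classes
X = aggregate_features(X_local, x_global)

svm = LinearSVC(C=1.0, max_iter=5000)
svm.fit(X, y)
initial_scores = svm.decision_function(X)  # per-class scores, used as unaries

# Stage 2 (placeholder): the paper refines these scores with a multi-label CRF
# over neighbouring regions; here we simply take the per-region argmax.
refined_labels = initial_scores.argmax(axis=1)
print(refined_labels[:10])
```

The reported numbers use the standard pixel accuracy and mean IU (class-averaged intersection-over-union) metrics; a minimal sketch of how these are commonly computed from a confusion matrix follows, again as an assumption about the evaluation protocol rather than the authors' exact code.

```python
# Pixel accuracy and mean IU from a confusion matrix of per-pixel labels.
import numpy as np

def confusion_matrix(gt, pred, n_classes):
    """Accumulate an n_classes x n_classes confusion matrix over all pixels."""
    mask = (gt >= 0) & (gt < n_classes)
    return np.bincount(
        n_classes * gt[mask].astype(int) + pred[mask],
        minlength=n_classes ** 2,
    ).reshape(n_classes, n_classes)

def pixel_accuracy(cm):
    return np.diag(cm).sum() / cm.sum()

def mean_iu(cm):
    intersection = np.diag(cm)
    union = cm.sum(axis=1) + cm.sum(axis=0) - intersection
    return (intersection / np.maximum(union, 1)).mean()

# Toy example with 3 classes on a small label map.
gt = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 0]])
cm = confusion_matrix(gt.ravel(), pred.ravel(), n_classes=3)
print(pixel_accuracy(cm), mean_iu(cm))
```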
DOI: 10.1109/ROBIO.2016.7866432