Convolutional feature masking for joint object and stuff segmentation

Bibliographic Details
Published in: 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3992-4000
Main Authors: Dai, Jifeng; He, Kaiming; Sun, Jian
Format: Conference Proceeding; Journal Article
Language: English
Published: IEEE, 01.06.2015
ISSN: 1063-6919, 2575-7075
DOI: 10.1109/CVPR.2015.7299025

More Information
Summary: The topic of semantic segmentation has witnessed considerable progress due to the powerful features learned by convolutional neural networks (CNNs) [13]. The current leading approaches to semantic segmentation exploit shape information by extracting CNN features from masked image regions. This strategy introduces artificial boundaries on the images and may impact the quality of the extracted features. Besides, operating on the raw image domain requires running the network thousands of times on a single image, which is time-consuming. In this paper, we propose to exploit shape information by masking convolutional features. The proposal segments (e.g., super-pixels) are treated as masks on the convolutional feature maps. The CNN features of the segments are directly masked out from these maps and used to train classifiers for recognition. We further propose a joint method to handle objects and "stuff" (e.g., grass, sky, water) in the same framework. State-of-the-art results are demonstrated on the PASCAL VOC and the new PASCAL-CONTEXT benchmarks, with a compelling computational speed.
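
As a rough illustration of the feature-masking idea described in the summary, the sketch below projects a binary proposal mask onto a precomputed convolutional feature map, zeroes out features outside the segment, and pools the rest into a fixed-length descriptor. It is a minimal NumPy sketch under assumed shapes and a nearest-neighbour mask projection; the function and variable names are hypothetical and not taken from the authors' implementation.

import numpy as np

def mask_conv_features(feature_map, proposal_mask):
    # feature_map:   (C, H, W) CNN features computed once for the whole image.
    # proposal_mask: (H_img, W_img) binary mask of one segment proposal.
    c, h, w = feature_map.shape
    # Nearest-neighbour projection of the image-level mask onto the feature map
    # (an assumption here; the paper's exact projection may differ).
    ys = np.linspace(0, proposal_mask.shape[0] - 1, h).astype(int)
    xs = np.linspace(0, proposal_mask.shape[1] - 1, w).astype(int)
    small_mask = proposal_mask[np.ix_(ys, xs)].astype(feature_map.dtype)
    masked = feature_map * small_mask[None, :, :]   # zero features outside the segment
    return masked.reshape(c, -1).max(axis=1)        # simple max-pooling into a (C,) vector

# Toy usage: one 256-channel 14x14 feature map and one rectangular segment proposal.
fmap = np.random.rand(256, 14, 14).astype(np.float32)
mask = np.zeros((224, 224), dtype=np.uint8)
mask[40:160, 60:180] = 1
print(mask_conv_features(fmap, mask).shape)         # (256,)

Because the feature map is computed only once per image, many proposals can reuse it and each costs only a cheap masking and pooling step, which is the source of the speed-up over running the network on every masked image region.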