Pixel Contrastive-Consistent Semi-Supervised Semantic Segmentation

Bibliographic Details
Published in: arXiv.org
Main Authors: Zhong, Yuanyi; Yuan, Bodi; Wu, Hong; Yuan, Zhiqiang; Peng, Jian; Wang, Yu-Xiong
Format: Paper; Journal Article
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 20.08.2021

Summary: We present a novel semi-supervised semantic segmentation method which jointly achieves two desiderata of segmentation model regularities: the label-space consistency property between image augmentations and the feature-space contrastive property among different pixels. We leverage the pixel-level L2 loss and the pixel contrastive loss for the two purposes respectively. To address the computational efficiency issue and the false negative noise issue involved in the pixel contrastive loss, we further introduce and investigate several negative sampling techniques. Extensive experiments demonstrate the state-of-the-art performance of our method (PC2Seg) with the DeepLab-v3+ architecture, in several challenging semi-supervised settings derived from the VOC, Cityscapes, and COCO datasets.
ISSN: 2331-8422
DOI: 10.48550/arxiv.2108.09025
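
As a rough illustration of the two regularizers described in the summary, the sketch below combines a pixel-level L2 consistency term between two augmented views with an InfoNCE-style pixel contrastive term that samples negatives and filters likely false negatives by pseudo-label. The function names, tensor shapes, and the simple random negative sampling here are assumptions made for illustration only; this is a minimal PyTorch sketch, not the authors' PC2Seg implementation.

```python
import torch
import torch.nn.functional as F


def pixel_consistency_l2(pred_a, pred_b):
    """Pixel-level L2 consistency between class probabilities of two augmented views.

    pred_a, pred_b: (B, C, H, W) segmentation logits for two augmentations of the
    same unlabeled image, assumed to be spatially aligned.
    """
    return F.mse_loss(pred_a.softmax(dim=1), pred_b.softmax(dim=1))


def pixel_contrastive_loss(feat_a, feat_b, pseudo_labels,
                           num_negatives=256, temperature=0.1):
    """InfoNCE-style pixel contrastive loss with random negative sampling.

    feat_a, feat_b: (B, D, H, W) L2-normalized pixel embeddings of the two views.
    pseudo_labels:  (B, H, W) predicted classes, used to drop likely false negatives
                    (candidates sharing the anchor's pseudo-label).
    """
    B, D, H, W = feat_a.shape
    anchors = feat_a.permute(0, 2, 3, 1).reshape(-1, D)   # (N, D)
    positives = feat_b.permute(0, 2, 3, 1).reshape(-1, D) # (N, D), same pixel in the other view
    labels = pseudo_labels.reshape(-1)                    # (N,)
    N = anchors.shape[0]

    # Sample a shared pool of candidate negatives to keep the cost manageable.
    neg_idx = torch.randint(0, N, (num_negatives,), device=anchors.device)
    negatives = positives[neg_idx]                        # (K, D)

    pos_logit = (anchors * positives).sum(dim=1, keepdim=True) / temperature  # (N, 1)
    neg_logits = anchors @ negatives.t() / temperature                        # (N, K)

    # Mask out likely false negatives: candidates whose pseudo-label matches the anchor's.
    false_neg = labels.unsqueeze(1) == labels[neg_idx].unsqueeze(0)           # (N, K)
    neg_logits = neg_logits.masked_fill(false_neg, float('-inf'))

    logits = torch.cat([pos_logit, neg_logits], dim=1)                        # (N, 1+K)
    targets = torch.zeros(N, dtype=torch.long, device=anchors.device)         # positive at index 0
    return F.cross_entropy(logits, targets)
```

In a semi-supervised pipeline along these lines, the two terms would typically be added, with weighting coefficients, to the standard supervised cross-entropy loss computed on the labeled subset.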