Self-supervised sub-category exploration for pseudo label generation

Bibliographic Details
Published in: Automation in Construction, Vol. 151, p. 104862
Main Authors: Chern, Wei-Chih; Kim, Taegeon; Nguyen, Tam V.; Asari, Vijayan K.; Kim, Hongjo
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.07.2023
Summary: Image segmentation-based applications have been actively investigated, but preparing the polygon annotations they require is non-trivial. Previous studies proposed pseudo label generation methods based on weakly supervised learning to lessen the annotation burden. Nevertheless, as identified in this study, the quality of such pseudo labels can fall short in the construction domain due to target object characteristics and insufficient data size. To address this challenge, this study proposes a fusion architecture, SESC-CAM, built upon weakly and self-supervised learning methods. The proposed architecture was validated on the AIM dataset: the generated pseudo labels recorded mIoU scores of 64.99% before and 67.65% after refinement with a conditional random field, outperforming their predecessors by 11.29% and 9.14%, respectively. The refined pseudo labels were used to train a segmentation model, which reached a 74% mIoU in semantic segmentation. The findings of this study provide insights for automated training data preparation.
•Generating pseudo labels using weakly and self-supervised learning techniques.
•Proposing a novel pseudo label generation method for construction vehicles.
•Demonstrating the effectiveness of pseudo labels for segmentation models.
•Providing polygon annotations for the AIM dataset for segmentation-related studies.
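Two generic steps named in the abstract, refining pseudo labels with a conditional random field and scoring them with mIoU, can be sketched in a few lines. The Python sketch below is not the authors' SESC-CAM implementation; it assumes per-pixel class probabilities (for example, softmax-normalized class activation maps) are already available, uses the third-party pydensecrf package for the dense-CRF step, and takes its kernel hyperparameters (sxy, srgb, compat) from that package's examples rather than from the paper.

import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image_u8, probs, iters=5):
    """Refine per-pixel class probabilities against the image with a dense CRF.

    image_u8: H x W x 3 uint8 RGB image.
    probs:    C x H x W softmax scores (each pixel sums to 1 over classes).
    Returns an H x W integer map of refined pseudo labels.
    """
    c, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, c)
    d.setUnaryEnergy(unary_from_softmax(probs))      # unary term: -log p per class
    d.addPairwiseGaussian(sxy=3, compat=3)           # location-only smoothness kernel
    d.addPairwiseBilateral(sxy=80, srgb=13,          # appearance-sensitive kernel
                           rgbim=np.ascontiguousarray(image_u8), compat=10)
    q = np.array(d.inference(iters))                 # mean-field inference
    return q.reshape(c, h, w).argmax(axis=0)         # hard pseudo-label map

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union between two H x W integer label maps."""
    ious = []
    for k in range(num_classes):
        inter = np.logical_and(pred == k, gt == k).sum()
        union = np.logical_or(pred == k, gt == k).sum()
        if union > 0:                                # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

Under these assumptions, comparing mean_iou(probs.argmax(axis=0), gt, c) with mean_iou(crf_refine(image_u8, probs), gt, c) yields the kind of before/after refinement comparison the abstract reports; the paper's actual pipeline and hyperparameters may differ.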
ISSN: 0926-5805, 1872-7891
DOI: 10.1016/j.autcon.2023.104862