Weak supervision for generating pixel–level annotations in scene text segmentation
| Published in | Pattern Recognition Letters, Vol. 138, pp. 1-7 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Amsterdam: Elsevier B.V. / Elsevier Science Ltd, 01.10.2020 |
| Subjects | |
Summary:

•Weakly supervised generation of pixel-level annotations from bounding boxes.
•Two new pixel-level annotated datasets, COCO_TS and MLT_S, were generated and released.
•COCO_TS and MLT_S allow a network to be pre-trained more efficiently than with synthetic data.
•The SMANet architecture, tailored for scene text segmentation, has been proposed.
Providing pixel-level supervision for scene text segmentation is inherently difficult and costly, so only a few small datasets are available for this task. To face the scarcity of training data, previous approaches based on Convolutional Neural Networks (CNNs) rely on a synthetic dataset for pre-training. However, synthetic data cannot reproduce the complexity and variability of natural images. In this work, we propose a weakly supervised learning approach to reduce the domain shift between synthetic and real data. Leveraging the bounding-box supervision of the COCO-Text and MLT datasets, we generate weak pixel-level supervision for real images. In particular, the COCO-Text-Segmentation (COCO_TS) and MLT-Segmentation (MLT_S) datasets are created and released. These two datasets are used to train a CNN, the Segmentation Multiscale Attention Network (SMANet), which is specifically designed to address some peculiarities of the scene text segmentation task. The SMANet is trained end-to-end on the proposed datasets, and the experiments show that COCO_TS and MLT_S are a valid alternative to synthetic images, allowing the use of only a fraction of the training samples with a significant improvement in performance.
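The core idea of turning bounding-box supervision into weak pixel-level masks can be pictured with a short sketch. The snippet below is only an illustrative outline, not the authors' released pipeline: `segment_text_crop` is a hypothetical callable (e.g. a CNN pre-trained on synthetic data) assumed to return a per-pixel text probability map for a box crop, and the thresholds and the background/text/ignore label convention are assumptions made for the example.

```python
import numpy as np

# Label convention for the generated weak supervision (assumed, not from the paper):
BACKGROUND, TEXT, IGNORE = 0, 1, 255

def weak_mask_from_boxes(image, boxes, segment_text_crop, thresholds=(0.3, 0.7)):
    """Derive a weak pixel-level annotation from bounding-box supervision.

    image    : HxWx3 array of a real scene image.
    boxes    : iterable of (x0, y0, x1, y1) text bounding boxes.
    segment_text_crop : hypothetical model returning a float text-probability
                        map with the same spatial size as the crop.
    """
    h, w = image.shape[:2]
    lo, hi = thresholds
    # Everything outside the annotated boxes is treated as background.
    mask = np.full((h, w), BACKGROUND, dtype=np.uint8)
    for (x0, y0, x1, y1) in boxes:
        crop = image[y0:y1, x0:x1]
        prob = segment_text_crop(crop)                  # values in [0, 1]
        region = mask[y0:y1, x0:x1]                     # view into the mask
        region[prob >= hi] = TEXT                       # confident text pixels
        region[(prob > lo) & (prob < hi)] = IGNORE      # uncertain pixels,
                                                        # excluded from the loss
    return mask
```

In this reading, the weak masks are noisy but cover real images at scale, which is what allows a segmentation network such as SMANet to be pre-trained on COCO_TS and MLT_S instead of on purely synthetic data.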
| ISSN | 0167-8655, 1872-7344 |
|---|---|
| DOI | 10.1016/j.patrec.2020.06.023 |