Text Attention and Focal Negative Loss for Scene Text Detection


Bibliographic Details
Published in: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1 - 8
Main Authors: Huang, Randong; Xu, Bo
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2019

More Information
Summary: This paper proposes a novel attention mechanism and a novel loss function for scene text detectors. Specifically, the attention mechanism can effectively identify text regions by learning an attention mask automatically. The fine-grained attention mask is directly incorporated into the convolutional feature maps of a neural network to produce graininess-aware feature maps, which suppress background interference and emphasize the text regions. Consequently, our graininess-aware feature maps concentrate on text regions, in particular those of exceedingly small size. Additionally, to address the extreme text-background class imbalance during training, we also propose a new loss function, named Focal Negative Loss (FNL). The proposed loss function down-weights the loss assigned to easy negative samples, so that training is focused on hard negative samples. To evaluate the effectiveness of our text attention module and FNL, we integrate them into the efficient and accurate scene text detector (EAST). Comprehensive experimental results demonstrate that our text attention module and FNL improve the F-score of EAST by 3.98% on the ICDAR2015 dataset and by 1.87% on the MSRA-TD500 dataset.
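The summary describes FNL only at the level of its behavior: a focal-loss-style modulating factor that shrinks the contribution of easy (confidently classified) background samples while leaving positives untouched. The record does not give the paper's exact formulation, so the following is a hedged sketch of that idea for a binary text/background map; the function name, `gamma` parameter, and the choice to modulate only the negative term are illustrative assumptions.

```python
import numpy as np

def focal_negative_loss(p, y, gamma=2.0, eps=1e-7):
    """Sketch of a focal-negative-style loss (NOT the paper's exact FNL).

    p : predicted text probabilities in (0, 1)
    y : ground-truth labels (1 = text, 0 = background)

    The (p ** gamma) factor multiplies only the negative term, so an easy
    negative (p near 0) contributes almost nothing, while a hard negative
    (p near 1) keeps close to its full cross-entropy loss. Positive samples
    use plain cross-entropy, matching the summary's claim that FNL
    down-weights easy negatives to focus training on hard ones.
    """
    p = np.clip(p, eps, 1.0 - eps)
    pos_term = -y * np.log(p)                            # positives: standard CE
    neg_term = -(1.0 - y) * (p ** gamma) * np.log(1.0 - p)  # negatives: focal-weighted
    return float(np.mean(pos_term + neg_term))
```

With `gamma = 0` the negative term reduces to ordinary binary cross-entropy; increasing `gamma` widens the gap between easy and hard negatives, which is the class-imbalance mechanism the summary attributes to FNL.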
ISSN:2161-4407
DOI:10.1109/IJCNN.2019.8851959