Text Attention and Focal Negative Loss for Scene Text Detection
Published in | 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1 - 8 |
---|---|
Main Authors | , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.07.2019 |
Summary: | This paper proposes a novel attention mechanism and a novel loss function for scene text detectors. Specifically, the attention mechanism effectively identifies text regions by learning an attention mask automatically. The fine-grained attention mask is directly incorporated into the convolutional feature maps of a neural network to produce graininess-aware feature maps, which suppress background interference and emphasize the text regions. As a result, our graininess-aware feature maps concentrate on text regions, especially those of very small size. Additionally, to address the extreme text-background class imbalance during training, we also propose a new loss function, named Focal Negative Loss (FNL). The proposed loss function down-weights the loss assigned to easy negative samples, so that training focuses on hard negative samples. To evaluate the effectiveness of our text attention module and FNL, we integrate them into the efficient and accurate scene text detector (EAST). Comprehensive experimental results demonstrate that our text attention module and FNL improve the F-score of EAST by 3.98% on the ICDAR2015 dataset and 1.87% on the MSRA-TD500 dataset. |
---|---|
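The record does not include the paper's formulation of FNL, so the sketch below is an assumption: it implements the focal-loss-style down-weighting of easy negatives that the abstract describes, applied per pixel. The function name `focal_negative_loss`, the parameter `gamma`, and the binary text/background framing are all illustrative, not the authors' definitions.

```python
import numpy as np

def focal_negative_loss(p_text, is_text, gamma=2.0, eps=1e-7):
    """Illustrative focal-style per-pixel loss (not the paper's exact FNL).

    p_text : predicted probability that each pixel belongs to text.
    is_text: ground-truth mask (1 = text, 0 = background).

    Text pixels use plain cross-entropy, -log(p). Background pixels
    are down-weighted by p_text**gamma, so easy negatives (p_text ~ 0)
    contribute almost nothing and training focuses on hard negatives.
    """
    p = np.clip(np.asarray(p_text, dtype=float), eps, 1 - eps)
    pos_loss = -np.log(p)                     # text (positive) pixels
    neg_loss = -(p ** gamma) * np.log(1 - p)  # down-weighted background
    return np.where(np.asarray(is_text) == 1, pos_loss, neg_loss).mean()
```

With `gamma=2`, an easy negative predicted at `p_text = 0.01` incurs roughly `1e-4` of its plain cross-entropy loss, while a hard negative at `p_text = 0.9` keeps most of it, which is the imbalance-handling behavior the abstract attributes to FNL.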
ISSN: | 2161-4407 |
DOI: | 10.1109/IJCNN.2019.8851959 |