Mixed local channel attention for object detection


Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 123, p. 106442
Main Authors: Wan, Dahang; Lu, Rongsheng; Shen, Siyuan; Xu, Ting; Lang, Xianli; Ren, Zhijie
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.08.2023
Summary: The attention mechanism, one of the most extensively used components in computer vision, helps neural networks emphasize significant elements and suppress irrelevant ones. However, the vast majority of channel attention mechanisms capture only channel feature information and ignore spatial feature information, which degrades model representation and object detection performance, while spatial attention modules are often complex and expensive. To strike a balance between performance and complexity, this paper proposes a lightweight Mixed Local Channel Attention (MLCA) module that improves the performance of object detection networks by simultaneously incorporating both channel and spatial information, as well as local and global information, to enhance the expressive power of the network. On this basis, the MobileNet-Attention-YOLO (MAY) algorithm is presented for comparing the performance of various attention modules. On the PASCAL VOC and SIMD datasets, MLCA achieves a better balance between model representation efficacy, performance, and complexity than alternative attention techniques. Against the Squeeze-and-Excitation (SE) attention mechanism on the PASCAL VOC dataset and the Coordinate Attention (CA) method on the SIMD dataset, mAP is improved by 1.0% and 1.5%, respectively.

Highlights:
•Proposed a lightweight Mixed Local Channel Attention (MLCA) method.
•Proposed a new object detection network called MobileNet-Attention-YOLO (MAY).
•Verified the feasibility and effectiveness of MLCA and MAY.
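The abstract describes channel attention only at a high level. For context, the Squeeze-and-Excitation (SE) baseline that MLCA is compared against can be sketched in a few lines: a global average pool "squeezes" each channel to a scalar, a small bottleneck MLP with a sigmoid produces a per-channel gate, and the feature map is rescaled by that gate. The sketch below is an illustrative NumPy rendering of this SE-style reweighting, not the authors' MLCA implementation; the function and weight names (`se_attention`, `w1`, `w2`) are hypothetical.

```python
import numpy as np

def se_attention(x, w1, w2):
    """SE-style channel reweighting on a (C, H, W) feature map.

    w1 (C//r, C) and w2 (C, C//r) are the bottleneck MLP weights,
    with r the channel reduction ratio.
    """
    s = x.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)              # excitation: FC + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))   # FC + sigmoid -> per-channel gate in (0, 1)
    return x * gate[:, None, None]           # rescale each channel by its gate

rng = np.random.default_rng(0)
c, r = 8, 4
x = rng.standard_normal((c, 6, 6))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
y = se_attention(x, w1, w2)
print(y.shape)  # (8, 6, 6)
```

Because the gate lies strictly in (0, 1), SE can only scale channels down, and it discards all spatial layout when it pools; incorporating spatial and local information on top of this channel gating is the gap the MLCA module is proposed to fill.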
ISSN: 0952-1976
DOI: 10.1016/j.engappai.2023.106442