Channelized Axial Attention -- Considering Channel Relation within Spatial Attention for Semantic Segmentation
Format | Journal Article |
---|---|
Language | English |
Published | 18.01.2021 |
Summary: Spatial and channel attentions, modelling the semantic interdependencies in the spatial and channel dimensions respectively, have recently been widely used for semantic segmentation. However, computing spatial and channel attentions separately sometimes causes errors, especially in difficult cases. In this paper, we propose Channelized Axial Attention (CAA) to seamlessly integrate channel attention and spatial attention into a single operation with negligible computation overhead. Specifically, we break down the dot-product operation of the spatial attention into two parts and insert channel relation in between, allowing for independently optimized channel attention on each spatial location. We further develop grouped vectorization, which allows our model to run with very little memory consumption without slowing down the running speed. Comparative experiments conducted on multiple benchmark datasets, including Cityscapes, PASCAL Context, and COCO-Stuff, demonstrate that our CAA outperforms many state-of-the-art segmentation models (including dual attention) on all tested datasets.
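The core idea in the summary — splitting the spatial-attention dot product into two parts and inserting a channel relation in between — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: positions are flattened to an `(N, C)` matrix, and the channel relation (learned in the paper) is stood in for by a simple per-position softmax gate over channels.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channelized_axial_attention(q, k, v):
    """Toy sketch of CAA's channelized spatial attention.

    q, k, v: (N, C) arrays — N flattened spatial positions, C channels.
    Hypothetical stand-in for the paper's learned channel relation.
    """
    # Part 1 of the dot-product attention: spatial affinity map.
    A = softmax(q @ k.T, axis=-1)                    # (N, N), rows sum to 1

    # Keep the per-position partial products A[i, j] * v[j] instead of
    # summing immediately — this is where CAA "breaks down" the dot product.
    partial = A[:, :, None] * v[None, :, :]          # (N, N, C)

    # Channel relation inserted in between: an independent channel gate
    # per output position (illustrative softmax gate; the paper learns this).
    gate = softmax(partial.mean(axis=1), axis=-1)    # (N, C)

    # Part 2: finish the aggregation with channelized weights.
    return (partial * gate[:, None, :]).sum(axis=1)  # (N, C)
```

Note the `(N, N, C)` intermediate: materializing it naively is exactly the memory cost that motivates the paper's grouped vectorization, which processes spatial positions in groups to keep memory low without slowing inference.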
DOI: 10.48550/arxiv.2101.07434