AANet: Adaptive Attention Network for COVID-19 Detection From Chest X-Ray Images

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 11, pp. 4781-4792
Main Authors: Lin, Zhijie; He, Zhaoshui; Xie, Shengli; Wang, Xu; Tan, Ji; Lu, Jun; Tan, Beihai
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2021
Summary: Accurate and rapid diagnosis of COVID-19 using chest X-ray (CXR) plays an important role in large-scale screening and epidemic prevention. Unfortunately, identifying COVID-19 from CXR images is challenging as its radiographic features have a variety of complex appearances, such as widespread ground-glass opacities and diffuse reticular-nodular opacities. To solve this problem, we propose an adaptive attention network (AANet), which can adaptively extract the characteristic radiographic findings of COVID-19 from the infected regions with various scales and appearances. It contains two main components: an adaptive deformable ResNet and an attention-based encoder. First, the adaptive deformable ResNet, which adaptively adjusts the receptive fields to learn feature representations according to the shape and scale of infected regions, is designed to handle the diversity of COVID-19 radiographic features. Then, the attention-based encoder is developed to model nonlocal interactions by a self-attention mechanism, which learns rich context information to detect the lesion regions with complex shapes. Extensive experiments on several public datasets show that the proposed AANet outperforms state-of-the-art methods.
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2021.3114747
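The summary above names two building blocks: a deformable residual block whose receptive field adapts to the shape and scale of infected regions, and a self-attention encoder that models nonlocal interactions across the feature map. The Python sketch below illustrates what such components could look like in PyTorch. It is not the authors' implementation; the class names, layer sizes, and the use of torchvision.ops.DeformConv2d and nn.MultiheadAttention are all illustrative assumptions.

# Minimal sketch (not the authors' code) of the two components described in the abstract.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class AdaptiveDeformableBlock(nn.Module):
    """Residual block whose 3x3 convolution samples at learned offsets,
    so its effective receptive field can follow lesion shape and scale."""

    def __init__(self, channels: int):
        super().__init__()
        # Offsets predicted from the input: 2 values (x, y) per kernel position.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.deform(x, self.offset(x))
        return torch.relu(x + self.bn(out))  # residual connection


class AttentionEncoder(nn.Module):
    """Self-attention over flattened feature-map positions to model
    nonlocal interactions between distant lesion regions."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):  # x: (N, C, H, W)
        n, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (N, H*W, C)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)
        return tokens.transpose(1, 2).reshape(n, c, h, w)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 28, 28)  # dummy CXR feature map
    feats = AdaptiveDeformableBlock(64)(feats)
    feats = AttentionEncoder(64)(feats)
    print(feats.shape)  # torch.Size([1, 64, 28, 28])

In this sketch the offsets for the deformable convolution are predicted from the input feature map itself, which is one common way to let the sampling grid adapt to lesion geometry; how AANet actually parameterizes its adaptive receptive fields and attention encoder is specified in the paper.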