Learning to recognize Thoracic Disease in Chest X-rays with Knowledge-Guided Deep Zoom Neural Networks

Bibliographic Details
Published in: IEEE Access, Vol. 8, p. 1
Main Authors: Wang, Kun; Zhang, Xiaohong; Huang, Sheng; Chen, Feiyu; Zhang, Xiangbo; Huangfu, Luwen
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.01.2020

Summary: Automatic and accurate thorax disease diagnosis in chest X-ray (CXR) images plays an essential role in computer-assisted clinical analysis. However, because of noisy imaging regions and the visual similarity between diseased areas and their surroundings, precise analysis of thoracic disease is a challenging problem. In this study, we propose a novel knowledge-guided deep zoom neural network (KGZNet), a data-driven model. Our approach leverages prior medical knowledge to guide its training process, since thoracic diseases are typically confined to the lung regions. We also employ weakly supervised learning (WSL) to search for finer regions without using annotated samples. Learning at each scale is performed by a classification sub-network. KGZNet starts from the global image and iteratively generates discriminative parts from coarse to fine: each finer-scale sub-network takes as input an amplified, attended discriminative region from the previous scale in a recurrent way. Specifically, we first train a robust modified U-Net model for lung segmentation and extract the lung area from the original CXR image with the Lung Region Generator. Then, guided by the attention heatmap, we obtain a finer discriminative lesion region from the lung-region image with the Lesion Region Generator. Lastly, the most discriminative features are fused and complementary feature information is learned for the final disease prediction. Extensive experiments demonstrate that our method effectively leverages discriminative region information and significantly outperforms other state-of-the-art methods on the thoracic disease recognition task. Furthermore, the proposed KGZNet gradually learns the discriminative region from coarse to fine in a mutually reinforced way. The code will be available at: https://github.com/ISSE-AILab/KGZNet.
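The abstract's central "zoom" step, in which an attention heatmap from a coarser scale selects the discriminative region fed to the finer scale, can be sketched as follows. This is a minimal illustrative sketch in plain Python, not the paper's implementation: the function name `crop_attended_region` and the `thresh_ratio` parameter are assumptions introduced here, and a real pipeline would operate on tensors and resize the crop before the next sub-network.

```python
def crop_attended_region(image, heatmap, thresh_ratio=0.5):
    """Crop `image` to the bounding box of high-attention pixels.

    Illustrative only: the name and `thresh_ratio` are assumptions,
    not taken from the paper. Both arguments are 2-D lists of
    numbers with the same shape.
    """
    peak = max(max(row) for row in heatmap)
    # Collect coordinates whose attention exceeds the threshold.
    hot = [(r, c) for r, row in enumerate(heatmap)
           for c, v in enumerate(row) if v >= thresh_ratio * peak]
    r0, r1 = min(r for r, _ in hot), max(r for r, _ in hot)
    c0, c1 = min(c for _, c in hot), max(c for _, c in hot)
    # The cropped patch is what a finer-scale sub-network would
    # receive (after amplification) at the next zoom level.
    return [row[c0:c1 + 1] for row in image[r0:r1 + 1]]

# Toy 6x6 "image" with attention concentrated in a 2x2 patch.
image = [[6 * r + c for c in range(6)] for r in range(6)]
heatmap = [[0.0] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(3, 5):
        heatmap[r][c] = 1.0
crop = crop_attended_region(image, heatmap)  # → [[15, 16], [21, 22]]
```

Applying the same operation recursively (global image → lung region → lesion region) gives the coarse-to-fine recurrence the abstract describes, with each scale's classifier producing the heatmap that drives the next crop.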
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3020579