DCACorrCapsNet: A deep channel‐attention correlative capsule network for COVID‐19 detection based on multi‐source medical images
| Published in | IET Image Processing, Vol. 17, No. 4, pp. 988–1000 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | Wiley, 01.03.2023 |
| ISSN | 1751-9659, 1751-9667 |
| DOI | 10.1049/ipr2.12690 |
Summary: The global spread of COVID‐19 has grown increasingly severe since 2019, causing large‐scale deaths and disrupting production and daily life. Broadly speaking, methods of detecting COVID‐19 include evaluation of disease characterization, clinical examination, and medical imaging. Among these, CT and X‐ray screening helps doctors and patients' families observe and diagnose the severity and progression of COVID‐19 more intuitively. However, manual diagnosis of medical images is inefficient, and prolonged, fatiguing scrutiny degrades diagnostic accuracy, so a fully automated method is needed to assist in processing and analysing medical images. Deep learning methods can rapidly help differentiate COVID‐19 from other pneumonia‐related diseases or healthy subjects. However, because labelled images are limited and both models and data are monotonous, the learned results are biased, yielding inaccurate auxiliary diagnoses. To address these issues, a hybrid model, the deep channel‐attention correlative capsule network, is proposed for channel‐attention‐based spatial feature extraction, correlative feature extraction, and fused‐feature classification. Experiments are validated on X‐ray and CT image datasets, and the results outperform a large number of existing state‐of‐the‐art studies.
We first develop a multi‐feature extractor that generates Fisher vectors in the time and frequency domains, together with channel‐attention‐based convolutional features.
A multi‐level capsule module embeds primary capsules, correlative capsules, and digit capsules, learning and capturing the correlations between primary capsules after the convolution operation.
We take the four outputs of the previous two modules as input and train CatNN and GBDT2NN in DeepGBM separately, fusing the multiple features to achieve classification.
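The abstract does not give implementation details, but the three stages it names can be illustrated with a toy NumPy sketch: a squeeze‐and‐excitation‐style channel‐attention gate (one common scheme; the paper's exact gating is not specified here), a hypothetical correlative re‐weighting of primary capsules followed by the standard capsule squashing non‐linearity, and a simple late fusion of two classifier scores standing in for the CatNN and GBDT2NN branches. All function names, shapes, and the correlation formula are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat):
    """SE-style channel attention (assumed form): squeeze by global
    average pooling, excite with a sigmoid gate per channel.
    feat: (channels, height, width) feature map."""
    pooled = feat.mean(axis=(1, 2))                 # squeeze -> (channels,)
    gate = 1.0 / (1.0 + np.exp(-pooled))            # excite  -> (channels,)
    return feat * gate[:, None, None]               # re-weight each channel

def squash(v, axis=-1, eps=1e-9):
    """Capsule squashing non-linearity: shrinks vector length into [0, 1)."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def correlative_capsules(primary):
    """Hypothetical correlative step: pairwise cosine similarity between
    primary capsules re-weights each capsule before squashing."""
    norm = primary / (np.linalg.norm(primary, axis=1, keepdims=True) + 1e-9)
    corr = norm @ norm.T                            # (n_caps, n_caps)
    weights = corr.mean(axis=1, keepdims=True)      # mean similarity per capsule
    return squash(primary * weights)

# Toy forward pass over a random stand-in feature map.
feat = rng.standard_normal((8, 6, 6))               # 8 channels, 6x6 map
attended = channel_attention(feat)
primary = attended.reshape(16, 18)                  # 16 primary capsules, dim 18
caps = correlative_capsules(primary)

# Late fusion of two branch scores (stand-ins for CatNN / GBDT2NN outputs).
score_a, score_b = 0.7, 0.5
fused = 0.5 * (score_a + score_b)
```

The squash non‐linearity guarantees every output capsule has length below one, so capsule length can be read as an existence probability; the averaged fusion is only a placeholder for DeepGBM's learned combination.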