Enhancing the ophthalmic AI assessment with a fundus image quality classifier using local and global attention mechanisms


Bibliographic Details
Published in: Frontiers in Medicine, Vol. 11, p. 1418048
Main Authors: Wang, Shengzhan; Shen, Wenyue; Gao, Zhiyuan; Jiang, Xiaoyu; Wang, Yaqi; Li, Yunxiang; Ma, Xiaoyu; Wang, Wenhao; Xin, Shuanghua; Ren, Weina; Jin, Kai; Ye, Juan
Format: Journal Article
Language: English
Published: Switzerland, Frontiers Media S.A., 07.08.2024

More Information
Summary: The assessment of image quality (IQA) plays a pivotal role in image-based computer-aided diagnosis, with fundus imaging standing as the primary method for the screening and diagnosis of ophthalmic diseases. Conventional studies on fundus IQA tend to rely on simplistic datasets for evaluation, predominantly focusing on either local or global information rather than a synthesis of both. Moreover, the interpretability of these studies often lacks compelling evidence. To address these issues, this study introduces the Local and Global Attention Aggregated Deep Neural Network (LGAANet), an approach that integrates both local and global information for enhanced analysis. LGAANet was developed and validated using a Multi-Source Heterogeneous Fundus (MSHF) database encompassing a diverse collection of images. This dataset includes 802 color fundus photography (CFP) images (302 from portable cameras) and 500 ultrawide-field (UWF) images from 904 patients with diabetic retinopathy (DR) and glaucoma, as well as healthy individuals. Image quality was assessed by three ophthalmologists, using the human visual system as a benchmark. Furthermore, the model employs attention mechanisms and saliency maps to bolster its interpretability. In testing on the CFP dataset, LGAANet demonstrated high accuracy in three critical dimensions of image quality derived from the characteristics of the human visual system (illumination, clarity, and contrast), which also indicate which aspects of image quality could be improved, recording scores of 0.947, 0.924, and 0.947, respectively. Similarly, on the UWF dataset, the model achieved accuracies of 0.889, 0.913, and 0.923, respectively. These results underscore the efficacy of LGAANet in distinguishing between varying degrees of image quality with high precision.
To our knowledge, LGAANet represents the inaugural algorithm trained on an MSHF dataset specifically for fundus IQA, marking a significant milestone in the advancement of computer-aided diagnosis in ophthalmology. This research significantly contributes to the field, offering a novel methodology for the assessment and interpretation of fundus images in the detection and diagnosis of ocular diseases.
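The record does not include the model itself. As a rough illustration of the general idea the abstract describes, aggregating attention-weighted local (patch-level) features with a global image feature, here is a minimal NumPy sketch; all names, shapes, and the fusion-by-concatenation choice are hypothetical assumptions, not details taken from the paper:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_local_global(patch_feats, global_feat, w_attn):
    """Attention-weighted pooling of patch (local) features,
    concatenated with a global image feature vector.

    patch_feats: (num_patches, d) local features
    global_feat: (d,) global feature
    w_attn:      (d,) learned attention projection (here random)
    """
    scores = patch_feats @ w_attn          # one relevance score per patch
    weights = softmax(scores)              # attention distribution over patches
    local_summary = weights @ patch_feats  # (d,) attention-pooled local feature
    return np.concatenate([local_summary, global_feat])  # (2d,) fused feature

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))  # e.g. 16 image patches, 8-dim features
glob = rng.normal(size=8)           # whole-image feature
w = rng.normal(size=8)
fused = aggregate_local_global(patches, glob, w)
print(fused.shape)  # (16,)
```

In a trained network the attention weights would be produced by learned layers and the fused vector fed to a classification head; this sketch only shows how a local attention summary and a global descriptor can be combined into one representation.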
Guoming Zhang, Shenzhen Eye Hospital, China
Edited by: Shida Chen, Sun Yat-sen University, China
Reviewed by: Jian Xiong, Second Affiliated Hospital of Nanchang University, China
ISSN: 2296-858X
DOI: 10.3389/fmed.2024.1418048