A comprehensive AI model development framework for consistent Gleason grading

Bibliographic Details
Published in Communications Medicine Vol. 4; no. 1; p. 84
Main Authors Huo, Xinmi, Ong, Kok Haur, Lau, Kah Weng, Gole, Laurent, Young, David M., Tan, Char Loo, Zhu, Xiaohui, Zhang, Chongchong, Zhang, Yonghui, Li, Longjie, Han, Hao, Lu, Haoda, Zhang, Jing, Hou, Jun, Zhao, Huanfen, Gan, Hualei, Yin, Lijuan, Wang, Xingxing, Chen, Xiaoyue, Lv, Hong, Cao, Haotian, Yu, Xiaozhen, Shi, Yabin, Huang, Ziling, Marini, Gabriel, Xu, Jun, Liu, Bingxian, Chen, Bingxian, Wang, Qiang, Gui, Kun, Shi, Wenzhao, Sun, Yingying, Chen, Wanyuan, Cao, Dalong, Sanders, Stephan J., Lee, Hwee Kuan, Hue, Susan Swee-Shan, Yu, Weimiao, Tan, Soo Yong
Format Journal Article
Language English
Published London: Nature Publishing Group UK, 09.05.2024
Springer Nature B.V
Nature Portfolio

Summary: Background Artificial Intelligence (AI)-based solutions for Gleason grading hold promise for pathologists, but image quality inconsistency, the need for continuous data integration, and limited generalizability hinder their adoption and scalability. Methods We present a comprehensive digital pathology workflow for AI-assisted Gleason grading. It incorporates A!MagQC (image quality control), A!HistoClouds (cloud-based annotation), and Pathologist-AI Interaction (PAI) for continuous model improvement. Trained on Akoya-scanned images only, the model uses color augmentation and image appearance migration to address scanner variations. We evaluate it on Whole Slide Images (WSIs) from five additional scanners and conduct validations with pathologists to assess AI efficacy and PAI. Results Our model achieves an average F1 score of 0.80 on annotations and a Quadratic Weighted Kappa of 0.71 on WSIs for Akoya-scanned images. Applying our generalization solution increases the average F1 score for Gleason pattern detection from 0.73 to 0.88 on images from other scanners. The model reduces Gleason scoring time by 43% while maintaining accuracy. Additionally, PAI improves annotation efficiency 2.5-fold and leads to further improvements in model performance. Conclusions This pipeline represents a notable advancement in AI-assisted Gleason grading for improved consistency, accuracy, and efficiency. Unlike previous methods limited by scanner specificity, our model achieves outstanding performance across diverse scanners. This improvement paves the way for its seamless integration into clinical workflows. Plain language summary Gleason grading is a well-accepted diagnostic standard for assessing the severity of prostate cancer in patients' tissue samples, based on how abnormal the cells in the prostate tumor look under a microscope. This process can be complex and time-consuming.
We explore how artificial intelligence (AI) can help pathologists perform Gleason grading more efficiently and consistently. We build an AI-based system that automatically checks image quality, standardizes the appearance of images from different equipment, learns from pathologists' feedback, and continually improves model performance. Testing shows that our approach achieves consistent results across different equipment and improves the efficiency of the grading process. With further testing and implementation in the clinic, our approach could potentially improve prostate cancer diagnosis and management. Huo, Ong et al. present a comprehensive workflow aimed at overcoming key limitations of prior approaches to artificial intelligence (AI)-assisted prostate cancer Gleason grading. Their approach incorporates automated quality control, efficient annotation and visualization, and pathologist-AI interaction.
ISSN: 2730-664X
DOI: 10.1038/s43856-024-00502-1