Knowledge-enhanced visual-language pre-training on chest radiology images


Bibliographic Details
Published in: Nature Communications, Vol. 14, no. 1, p. 4542
Main Authors: Zhang, Xiaoman; Wu, Chaoyi; Zhang, Ya; Xie, Weidi; Wang, Yanfeng
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 28.07.2023

Summary: While multi-modal foundation models pre-trained on large-scale data have been successful in natural language understanding and vision recognition, their use in medical domains is still limited due to the fine-grained nature of medical tasks and the high demand for domain knowledge. To address this challenge, we propose an approach called Knowledge-enhanced Auto Diagnosis (KAD) which leverages existing medical domain knowledge to guide vision-language pre-training using paired chest X-rays and radiology reports. We evaluate KAD on four external X-ray datasets and demonstrate that its zero-shot performance is not only comparable to that of fully supervised models but also superior to the average of three expert radiologists for three (out of five) pathologies with statistical significance. Moreover, when few-shot annotation is available, KAD outperforms all existing approaches in fine-tuning settings, demonstrating its potential for application in different clinical scenarios.

Editor's summary: Despite the success of multi-modal foundation models in natural language and vision tasks, their use in medical domains is limited. Here, the authors propose to train a foundation model for chest X-ray diagnosis that combines medical domain knowledge with vision-language representation learning.
ISSN: 2041-1723
DOI: 10.1038/s41467-023-40260-7