Decomposing Disease Descriptions for Enhanced Pathology Detection: A Multi-Aspect Vision-Language Pre-training Framework

Bibliographic Details
Main Authors: Phan, Vu Minh Hieu; Xie, Yutong; Qi, Yuankai; Liu, Lingqiao; Liu, Liyang; Zhang, Bowen; Liao, Zhibin; Wu, Qi; To, Minh-Son; Verjans, Johan W
Format: Journal Article
Language: English
Published: 12.03.2024

Summary: CVPR 2024. Medical vision-language pre-training (VLP) has emerged as a frontier of research, enabling zero-shot pathological recognition by comparing a query image with textual descriptions of each disease. Due to the complex semantics of biomedical text, current methods struggle to align medical images with the key pathological findings in unstructured reports, leading to misalignment with the target disease's textual representation. This paper introduces a novel VLP framework that decomposes disease descriptions into their fundamental aspects, leveraging prior knowledge about the visual manifestations of pathologies obtained by consulting a large language model and medical experts. A Transformer module aligns an input image with the diverse aspects of a disease, producing aspect-centric image representations; consolidating the matches from each aspect improves the compatibility between an image and its associated disease. Building on the aspect-oriented representations, the authors also present a dual-head Transformer tailored to process known and unknown diseases, optimizing overall detection efficacy. In experiments on seven downstream datasets, the method improves the accuracy of recent approaches by up to 8.56% and 17.26% for seen and unseen categories, respectively. Code is released at https://github.com/HieuPhan33/MAVL.
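The core idea of aspect-level matching can be illustrated with a minimal sketch: embed each disease's visual aspects separately, score the image against every aspect, and consolidate the per-aspect scores into one image-disease compatibility. The sketch below assumes CLIP-style embeddings; the function names, the example aspect list, and the mean-pooling aggregation are illustrative assumptions, not the authors' released MAVL implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn.functional as F

# Hypothetical aspect decomposition for one disease. In the paper, such
# aspects are elicited from a large language model and verified by
# medical experts; these strings are placeholders for illustration.
ASPECTS = {
    "pneumonia": [
        "patchy airspace opacity",
        "lobar consolidation",
        "air bronchograms",
    ],
}

def aspect_similarity(image_emb: torch.Tensor,
                      aspect_embs: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between one image embedding [D] and
    per-aspect text embeddings [A, D]; returns [A] aspect scores."""
    image_emb = F.normalize(image_emb, dim=-1)
    aspect_embs = F.normalize(aspect_embs, dim=-1)
    return aspect_embs @ image_emb

def disease_score(image_emb: torch.Tensor,
                  aspect_embs: torch.Tensor) -> torch.Tensor:
    """Consolidate aspect-level matches into a single image-disease
    compatibility score (mean pooling here; the paper's aggregation
    may differ)."""
    return aspect_similarity(image_emb, aspect_embs).mean()

# Toy usage: random vectors stand in for the outputs of the vision
# and text encoders.
D = 512
img = torch.randn(D)
aspects = torch.randn(len(ASPECTS["pneumonia"]), D)
print(float(disease_score(img, aspects)))
```

Scoring a disease through several concrete visual aspects, rather than one monolithic description, is what lets an unseen disease be recognized whenever its aspects overlap with patterns learned during pre-training.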
DOI: 10.48550/arXiv.2403.07636