Decomposing Disease Descriptions for Enhanced Pathology Detection: A Multi-Aspect Vision-Language Pre-training Framework
| Format | Journal Article |
|---|---|
| Language | English |
| Published | 12.03.2024 |
Summary: CVPR 2024. Medical vision-language pre-training (VLP) has emerged as a research frontier, enabling zero-shot pathology recognition by comparing a query image against textual descriptions of each disease. Because biomedical texts carry complex semantics, current methods struggle to align medical images with the key pathological findings in unstructured reports, causing misalignment with the target disease's textual representation. In this paper, we introduce a novel VLP framework that dissects disease descriptions into their fundamental aspects, leveraging prior knowledge about the visual manifestations of pathologies obtained by consulting a large language model and medical experts. Using a Transformer module, our approach aligns an input image with the diverse elements of a disease, generating aspect-centric image representations. By consolidating the matches from each aspect, we improve the compatibility between an image and its associated disease. Additionally, capitalizing on the aspect-oriented representations, we present a dual-head Transformer tailored to process known and unknown diseases, optimizing overall detection performance. In experiments on seven downstream datasets, our method improves the accuracy of recent methods by up to 8.56% on seen categories and 17.26% on unseen categories. Our code is released at https://github.com/HieuPhan33/MAVL.
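To make the aspect-consolidation idea concrete, the snippet below is a minimal, hypothetical sketch of aspect-decomposed zero-shot scoring, not the authors' released MAVL code: the aspect lists, the stand-in linear encoders, the toy text featurizer, and the mean-pooling consolidation rule are all illustrative assumptions.

```python
# Minimal sketch of aspect-based zero-shot disease scoring (illustrative only;
# not the MAVL implementation). Encoders are stand-in linear layers where a
# real system would use pre-trained image and text encoders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
EMBED_DIM = 128

# Hypothetical aspect decomposition: each disease is described by several
# visual aspects rather than a single unstructured sentence.
disease_aspects = {
    "pneumonia": [
        "patchy airspace opacity",
        "lobar consolidation",
        "air bronchograms",
    ],
    "pleural_effusion": [
        "blunted costophrenic angle",
        "homogeneous basal opacity",
        "meniscus sign",
    ],
}

# Stand-in encoders mapping raw features into a shared embedding space.
image_encoder = torch.nn.Linear(2048, EMBED_DIM)
text_encoder = torch.nn.Linear(300, EMBED_DIM)

def embed_text(sentence: str) -> torch.Tensor:
    """Toy text embedding: hash tokens into a fixed bag-of-words vector,
    then project. A real system would use a pre-trained text encoder."""
    vec = torch.zeros(300)
    for tok in sentence.split():
        vec[hash(tok) % 300] += 1.0
    return text_encoder(vec)

def score_disease(image_feat: torch.Tensor, aspects: list[str]) -> torch.Tensor:
    """Consolidate per-aspect matches: here, the mean cosine similarity
    between the image embedding and each aspect embedding (mean pooling is
    one simple consolidation choice)."""
    sims = [F.cosine_similarity(image_feat, embed_text(a), dim=0) for a in aspects]
    return torch.stack(sims).mean()

# Zero-shot inference on one (random) query image: the predicted disease is
# the one whose consolidated aspect score is highest.
image_feat = image_encoder(torch.randn(2048))
scores = {d: score_disease(image_feat, a).item() for d, a in disease_aspects.items()}
print(max(scores, key=scores.get), scores)
```

Aggregating per-aspect similarities rather than matching against one monolithic description is the key point this sketch illustrates: a disease can score highly even when only some of its visual aspects are expressed in the report-style text.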
DOI: 10.48550/arxiv.2403.07636