Quilt-1M: One Million Image-Text Pairs for Histopathology
Main Authors:
Format: Journal Article
Language: English
Published: 19.06.2023
Subjects:
Summary: Recent accelerations in multi-modal applications have been made possible with the plethora of image and text data available online. However, the scarcity of analogous data in the medical field, specifically in histopathology, has slowed comparable progress. To enable similar representation learning for histopathology, we turn to YouTube, an untapped resource of videos, offering $1,087$ hours of valuable educational histopathology videos from expert clinicians. From YouTube, we curate QUILT: a large-scale vision-language dataset consisting of $802,144$ image and text pairs. QUILT was automatically curated using a mixture of models, including large language models, handcrafted algorithms, human knowledge databases, and automatic speech recognition. In comparison, the most comprehensive datasets curated for histopathology amass only around $200$K samples. We combine QUILT with datasets from other sources, including Twitter, research papers, and the internet in general, to create an even larger dataset: QUILT-1M, with $1$M paired image-text samples, marking it as the largest vision-language histopathology dataset to date. We demonstrate the value of QUILT-1M by fine-tuning a pre-trained CLIP model. Our model outperforms state-of-the-art models on both zero-shot and linear probing tasks for classifying new histopathology images across $13$ diverse patch-level datasets of $8$ different sub-pathologies and cross-modal retrieval tasks.
DOI: 10.48550/arxiv.2306.11207
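
The summary describes fine-tuning a pre-trained CLIP model on QUILT-1M and evaluating it zero-shot on new histopathology patches. As a rough illustration of that evaluation setup, the sketch below runs zero-shot classification with the open_clip library; the checkpoint id (`wisdomik/QuiltNet-B-32`), the image path, and the class prompts are assumptions for illustration and do not come from the record above.

```python
# Sketch: zero-shot classification of a histopathology patch with a CLIP-style
# model, in the spirit of the evaluation described in the summary.
# Assumed for illustration: hub id, image path, and class prompts.
import torch
import open_clip
from PIL import Image

HUB_ID = "hf-hub:wisdomik/QuiltNet-B-32"  # assumed checkpoint fine-tuned on QUILT-1M

# create_model_and_transforms returns (model, train_preprocess, eval_preprocess)
model, _, preprocess = open_clip.create_model_and_transforms(HUB_ID)
tokenizer = open_clip.get_tokenizer(HUB_ID)
model.eval()

# One image patch and a handful of candidate class prompts (placeholders).
image = preprocess(Image.open("patch.png")).unsqueeze(0)
prompts = [
    "a histopathology image of benign tissue",
    "a histopathology image of adenocarcinoma",
    "a histopathology image of squamous cell carcinoma",
]
text = tokenizer(prompts)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then softmax the scaled cosine similarities into class scores.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```

The prompt set stands in for the label space of a downstream patch-level dataset; a linear-probing evaluation would instead freeze `encode_image` and train a linear classifier on the extracted features.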