OneDiff: A Generalist Model for Image Difference Captioning
Main Authors | , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 08.07.2024 |
Summary: | In computer vision, Image Difference Captioning (IDC) is crucial for
accurately describing variations between closely related images. Traditional
IDC methods often rely on specialist models, which restricts their
applicability across varied contexts. This paper introduces the OneDiff model,
a novel generalist approach that utilizes a robust vision-language model
architecture, integrating a siamese image encoder with a Visual Delta Module.
This configuration allows for the precise detection and articulation of
fine-grained differences between image pairs. OneDiff is trained through a
dual-phase strategy, encompassing Coupled Sample Training and multi-task
learning across a diverse array of data types, supported by our newly developed
DiffCap Dataset. This dataset merges real-world and synthetic data, enhancing
the training process and bolstering the model's robustness. Extensive testing
on diverse IDC benchmarks, such as Spot-the-Diff, CLEVR-Change, and
Birds-to-Words, shows that OneDiff consistently outperforms existing
state-of-the-art models in accuracy and adaptability, achieving average
improvements of up to 85% in CIDEr points. By setting a new benchmark in IDC,
OneDiff paves the way for more versatile and effective applications in
detecting and describing visual differences. The code, models, and data will be
made publicly available. |
DOI: | 10.48550/arxiv.2407.05645 |