IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations
| Field | Value |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | 01.04.2024 |
| Subjects | |
| Online Access | Get full text |
Summary: Current foundation models exhibit impressive capabilities when prompted either with text only or with both image and text inputs. But do their capabilities change depending on the input modality? In this work, we propose $\textbf{IsoBench}$, a benchmark dataset containing problems from four major areas: math, science, algorithms, and games. Each example is presented with multiple $\textbf{isomorphic representations}$ of inputs, such as visual, textual, and mathematical presentations. IsoBench provides fine-grained feedback to diagnose performance gaps caused by the form of the representation. Across various foundation models, we observe that on the same problem, models have a consistent preference for textual representations. Most prominently, when evaluated on all IsoBench problems, Claude-3 Opus performs 28.7 points worse when provided with images instead of text; similarly, GPT-4 Turbo is 18.7 points worse and Gemini Pro is 14.9 points worse. Finally, we present two prompting techniques, $\textit{IsoCombination}$ and $\textit{IsoScratchPad}$, which improve model performance by considering combinations of, and translations between, different input representations.
DOI: 10.48550/arxiv.2404.01266
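
The two prompting techniques named in the summary can be read as simple prompt-construction strategies. The sketch below is an assumption-laden illustration based only on the abstract, not the paper's actual code: `query_model`, `iso_combination`, `iso_scratchpad`, and all prompt wording are hypothetical stand-ins for whatever foundation-model interface and phrasing the authors used.

```python
from typing import Callable

def iso_combination(problem: str, representations: list[str],
                    query_model: Callable[[str], str]) -> str:
    """IsoCombination (as described in the abstract): present several
    isomorphic representations of the same input in a single prompt."""
    joined = "\n\n".join(f"Representation {i + 1}:\n{r}"
                         for i, r in enumerate(representations))
    prompt = (f"{problem}\n\nThe input below is given in several "
              f"equivalent forms.\n\n{joined}\n\nAnswer:")
    return query_model(prompt)

def iso_scratchpad(problem: str, visual_input: str,
                   query_model: Callable[[str], str]) -> str:
    """IsoScratchPad (as described in the abstract): first translate a
    non-textual representation into text, then solve from that text."""
    translation = query_model(
        "Rewrite the following input as a precise textual description:\n"
        f"{visual_input}")
    return query_model(f"{problem}\n\nInput (translated to text):\n"
                       f"{translation}\n\nAnswer:")

# Usage with a trivial stand-in model:
if __name__ == "__main__":
    stub = lambda prompt: f"<answer to a {len(prompt)}-char prompt>"
    print(iso_combination(
        "Is the graph connected?",
        ["edge list: (1,2), (2,3)",
         "adjacency matrix: [[0,1,0],[1,0,1],[0,1,0]]"],
        stub))
```

The design point carried over from the abstract is that both techniques operate purely at the prompt level: IsoCombination pools multiple equivalent input forms, while IsoScratchPad uses an intermediate model call to convert a weaker (e.g., visual) representation into the textual form the models prefer.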