Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you
Format: Journal Article
Language: English
Published: 01.10.2024
DOI: 10.48550/arxiv.2410.01023
Summary: Humans possess multimodal literacy, allowing them to actively integrate information from various modalities to form reasoning. Faced with challenges like lexical ambiguity in text, we supplement this with other modalities, such as thumbnail images or textbook illustrations. Is it possible for machines to achieve a similar multimodal understanding capability? In response, we present Understanding Pun with Image Explanations (UNPIE), a novel benchmark designed to assess the impact of multimodal inputs in resolving lexical ambiguities. Puns serve as the ideal subject for this evaluation due to their intrinsic ambiguity. Our dataset includes 1,000 puns, each accompanied by an image that explains both meanings. We pose three multimodal challenges with the annotations to assess different aspects of multimodal literacy: Pun Grounding, Disambiguation, and Reconstruction. The results indicate that various Socratic Models and Visual-Language Models improve over text-only models when given visual context, particularly as the complexity of the tasks increases.
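The summary describes 1,000 puns, each paired with an image explaining both meanings, and three tasks (Pun Grounding, Disambiguation, and Reconstruction). The minimal sketch below illustrates one plausible way such a sample and simple task checks could be represented; every field name, function signature, and the example pun are illustrative assumptions, not the benchmark's actual schema or evaluation protocol.

```python
# Hypothetical sketch of a UNPIE-style sample and its three tasks.
# All names and the scoring logic are assumptions for illustration only.
from dataclasses import dataclass


@dataclass
class PunSample:
    pun_text: str    # the pun sentence containing the ambiguous word
    pun_word: str    # the lexically ambiguous word (assumed annotation)
    sense_a: str     # paraphrase of the first meaning (assumed)
    sense_b: str     # paraphrase of the second meaning (assumed)
    image_path: str  # image that illustrates both meanings


def pun_grounding(sample: PunSample, predicted_word: str) -> bool:
    """Task 1 (assumed form): locate the pun word given text and image."""
    return predicted_word.lower() == sample.pun_word.lower()


def pun_disambiguation(sample: PunSample, chosen_sense: str) -> bool:
    """Task 2 (assumed form): choose a valid meaning of the pun word."""
    return chosen_sense in (sample.sense_a, sample.sense_b)


def pun_reconstruction(sample: PunSample, generated_pun: str) -> bool:
    """Task 3 (assumed form): regenerate the pun with visual help.
    A real evaluation would use a learned or overlap-based metric;
    exact string match is only a placeholder here."""
    return generated_pun.strip().lower() == sample.pun_text.strip().lower()


if __name__ == "__main__":
    sample = PunSample(
        pun_text="I used to be a banker, but I lost interest.",
        pun_word="interest",
        sense_a="curiosity or enthusiasm",
        sense_b="money earned on deposits",
        image_path="images/banker_pun.png",  # placeholder path
    )
    print(pun_grounding(sample, "interest"))                      # True
    print(pun_disambiguation(sample, "curiosity or enthusiasm"))  # True
```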