REBUS: A Robust Evaluation Benchmark of Understanding Symbols

Bibliographic Details
Published in: arXiv.org
Main Authors: Gritsevskiy, Andrew; Panickssery, Arjun; Kirtland, Aaron; Kauffman, Derik; Gundlach, Hans; Gritsevskaya, Irina; Cavanagh, Joe; Chiang, Jonathan; La Roux, Lydia; Hung, Michelle
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 03.06.2024

Summary: We propose a new benchmark evaluating the performance of multimodal large language models on rebus puzzles. The dataset covers 333 original examples of image-based wordplay, cluing 13 categories such as movies, composers, major cities, and food. To achieve good performance on the benchmark of identifying the clued word or phrase, models must combine image recognition and string manipulation with hypothesis testing, multi-step reasoning, and an understanding of human cognition, making for a complex, multimodal evaluation of capabilities. We find that GPT-4o significantly outperforms all other models, with the remaining proprietary models in turn outperforming all other evaluated models. However, even the best model achieves a final accuracy of only 42%, which drops to just 7% on hard puzzles, highlighting the need for substantial improvements in reasoning. Further, models rarely understand all parts of a puzzle and are almost always incapable of retroactively explaining the correct answer. Our benchmark can therefore be used to identify major shortcomings in the knowledge and reasoning of multimodal large language models.
ISSN: 2331-8422
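
Note: The summary above reports accuracy on identifying the clued word or phrase. As a minimal illustrative sketch only (not taken from the paper), one common way to compute such an exact-match score is shown below; the function and field names (normalize, exact_match_accuracy, "answer", "prediction") are assumptions made for this example.

    # Illustrative sketch only: the REBUS paper does not publish this exact script.
    # The data layout (a list of dicts with "answer" and "prediction") is assumed.

    def normalize(text: str) -> str:
        """Lowercase and drop punctuation so 'The Godfather!' matches 'the godfather'."""
        return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

    def exact_match_accuracy(puzzles: list[dict]) -> float:
        """Fraction of puzzles where the model's prediction matches the clued answer."""
        if not puzzles:
            return 0.0
        correct = sum(
            normalize(p["prediction"]) == normalize(p["answer"]) for p in puzzles
        )
        return correct / len(puzzles)

    if __name__ == "__main__":
        sample = [
            {"answer": "Tokyo", "prediction": "tokyo"},      # counted as correct
            {"answer": "Beethoven", "prediction": "Bach"},   # counted as incorrect
        ]
        print(f"accuracy = {exact_match_accuracy(sample):.0%}")  # accuracy = 50%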