A-OKVQA: A Benchmark for Visual Question Answering Using World Knowledge

Bibliographic Details
Published in: Computer Vision - ECCV 2022, Vol. 13668, pp. 146-162
Main Authors: Schwenk, Dustin; Khandelwal, Apoorv; Clark, Christopher; Marino, Kenneth; Mottaghi, Roozbeh
Format: Book Chapter
Language: English
Published: Switzerland: Springer Nature Switzerland, 01.01.2022
Series: Lecture Notes in Computer Science

Summary: The Visual Question Answering (VQA) task aspires to provide a meaningful testbed for the development of AI models that can jointly reason over visual and natural language inputs. Despite a proliferation of VQA datasets, this goal is hindered by a set of common limitations. These include a reliance on relatively simplistic questions that are repetitive in both concepts and linguistic structure, little world knowledge needed outside of the paired image, and limited reasoning required to arrive at the correct answer. We introduce A-OKVQA, a crowdsourced dataset composed of a diverse set of about 25K questions requiring a broad base of commonsense and world knowledge to answer. In contrast to existing knowledge-based VQA datasets, the questions generally cannot be answered by simply querying a knowledge base, and instead require some form of commonsense reasoning about the scene depicted in the image. We demonstrate the potential of this new dataset through a detailed analysis of its contents and baseline performance measurements over a variety of state-of-the-art vision–language models.
Bibliography: Supplementary Information: The online version contains supplementary material available at https://doi.org/10.1007/978-3-031-20074-8_9.
ISBN: 303120073X; 9783031200731
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-031-20074-8_9