A dataset of clinically generated visual questions and answers about radiology images

Bibliographic Details
Published in: Scientific Data, Vol. 5, No. 1, Article 180251 (10 pages)
Main Authors: Lau, Jason J.; Gayen, Soumya; Ben Abacha, Asma; Demner-Fushman, Dina
Format: Journal Article
Language: English
Published: London: Nature Publishing Group UK, 20.11.2018
Summary: Radiology images are an essential part of clinical decision making and population screening, e.g., for cancer. Automated systems could help clinicians cope with large amounts of images by answering questions about the image contents. An emerging area of artificial intelligence, Visual Question Answering (VQA) in the medical domain explores approaches to this form of clinical decision support. Success of such machine learning tools hinges on availability and design of collections composed of medical images augmented with question-answer pairs directed at the content of the image. We introduce VQA-RAD, the first manually constructed dataset where clinicians asked naturally occurring questions about radiology images and provided reference answers. Manual categorization of images and questions provides insight into clinically relevant tasks and the natural language to phrase them. Evaluating with well-known algorithms, we demonstrate the rich quality of this dataset over other automatically constructed ones. We propose VQA-RAD to encourage the community to design VQA tools with the goals of improving patient care.

Design Type(s): image creation and editing objective • anatomical image analysis objective
Measurement Type(s): image analysis
Technology Type(s): visual observation method
Factor Type(s): question type • answer type
Sample Characteristic(s): Homo sapiens • head • chest • abdomen

Machine-accessible metadata file describing the reported data (ISA-Tab format)
Author Contributions: J.L. conceptualized the study, selected images, designed the annotator interface, manually reviewed data, analyzed data, and contributed to the writing and editing of the manuscript. S.G. built the interface and collection for annotation, performed data analysis with models, and contributed to the editing of the manuscript. A.B. analyzed and processed the datasets, performed deep learning experiments, participated in the evaluation process, and contributed to the editing of the manuscript. D.D. oversaw study design and implementation and contributed to the writing and editing of the manuscript.
ISSN: 2052-4463
DOI: 10.1038/sdata.2018.251