MatViX: Multimodal Information Extraction from Visually Rich Articles

Bibliographic Details
Published in: arXiv.org
Main Authors: Ghazal Khalighinejad, Sharon Scott, Ollie Liu, Kelly L. Anderson, Rickard Stureborg, Aman Tyagi, Bhuwan Dhingra
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 27.10.2024
Summary: Multimodal information extraction (MIE) is crucial for scientific literature, where valuable data is often spread across text, figures, and tables. In materials science, extracting structured information from research articles can accelerate the discovery of new materials. However, the multimodal nature and complex interconnections of scientific content pose challenges for traditional text-based methods. We introduce MatViX, a benchmark consisting of 324 full-length research articles and 1,688 complex structured JSON files, carefully curated by domain experts. These JSON files are extracted from text, tables, and figures in full-length documents, providing a comprehensive challenge for MIE. We introduce an evaluation method that assesses curve similarity and the alignment of hierarchical structures. Additionally, we benchmark vision-language models (VLMs) capable of processing long contexts and multimodal inputs in a zero-shot manner, and show that using a specialized model (DePlot) can improve performance in extracting curves. Our results demonstrate significant room for improvement in current models. Our dataset and evaluation code are available at https://matvix-bench.github.io/.
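The abstract mentions an evaluation method that scores curve similarity between extracted and reference data series. The paper's actual metric is not specified in this record, so the following is only an illustrative sketch (the function name and the scoring rule are assumptions, not the authors' method): resample both curves onto a shared x-grid by linear interpolation, then score one minus the mean absolute error normalized by the reference curve's y-range.

```python
import numpy as np

def curve_similarity(pred, gold, n_points=100):
    """Hypothetical curve-similarity score in [0, 1] for two curves,
    each given as a list of (x, y) points. Not the paper's metric;
    a minimal sketch of the general idea."""
    px, py = np.array(sorted(pred), dtype=float).T
    gx, gy = np.array(sorted(gold), dtype=float).T
    # Compare only over the x-range where both curves are defined
    lo, hi = max(px.min(), gx.min()), min(px.max(), gx.max())
    if lo >= hi:
        return 0.0  # no overlap: treat as a complete mismatch
    grid = np.linspace(lo, hi, n_points)
    p = np.interp(grid, px, py)  # resample prediction onto shared grid
    g = np.interp(grid, gx, gy)  # resample reference onto shared grid
    scale = np.ptp(gy) or 1.0    # normalize by the reference y-range
    return float(max(0.0, 1.0 - np.mean(np.abs(p - g)) / scale))
```

Identical curves score 1.0, and curves whose average deviation exceeds the reference range score 0.0; a hierarchical-alignment metric for the JSON structures would be layered on top of a pointwise score like this.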
ISSN: 2331-8422