EgoOops: A Dataset for Mistake Action Detection from Egocentric Videos with Procedural Texts


Bibliographic Details
Main Authors: Haneji, Yuto; Nishimura, Taichi; Kameko, Hirotaka; Shirai, Keisuke; Yoshida, Tomoya; Kajimura, Keiya; Yamamoto, Koki; Cui, Taiyu; Nishimoto, Tomohiro; Mori, Shinsuke
Format: Journal Article
Language: English
Published: 07.10.2024

Summary: Mistake action detection from egocentric videos is crucial for developing intelligent archives that detect workers' errors and provide feedback. Previous studies have been limited to specific domains, focused on detecting mistakes from videos without procedural texts, and analyzed whether actions are mistakes. To address these limitations, in this paper, we propose the EgoOops dataset, which includes egocentric videos, procedural texts, and three types of annotations: video-text alignment, mistake labels, and descriptions for mistakes. EgoOops covers five procedural domains and includes 50 egocentric videos. The video-text alignment allows the model to detect mistakes based on both videos and procedural texts. The mistake labels and descriptions enable detailed analysis of real-world mistakes. Based on EgoOops, we tackle two tasks: video-text alignment and mistake detection. For video-text alignment, we enhance the recent StepFormer model with an additional loss for fine-tuning. Based on the alignment results, we propose a multi-modal classifier to predict mistake labels. In our experiments, the proposed methods achieve higher performance than the baselines. In addition, our ablation study demonstrates the effectiveness of combining videos and texts. We will release the dataset and code upon publication.
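The summary describes predicting mistake labels from a video segment together with its aligned procedural step. As a rough illustration only, the sketch below fuses a pooled video feature with a step-text embedding and classifies the pair; the module names, feature dimensions, and two-way label set are assumptions for illustration, not the authors' released EgoOops implementation.

```python
# Hypothetical multi-modal mistake classifier: concatenate a pooled video-segment
# feature with the embedding of its aligned procedure step, then classify with an
# MLP. Dimensions and the {correct, mistake} label set are illustrative assumptions.
import torch
import torch.nn as nn

class MultiModalMistakeClassifier(nn.Module):
    def __init__(self, video_dim=512, text_dim=512, hidden_dim=256, num_labels=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(video_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_labels),  # e.g., correct vs. mistake
        )

    def forward(self, video_feat, text_feat):
        # video_feat: (batch, video_dim) pooled features of a video segment
        # text_feat:  (batch, text_dim) embedding of the aligned procedural step
        fused = torch.cat([video_feat, text_feat], dim=-1)
        return self.mlp(fused)  # logits over mistake labels

# Usage with random tensors standing in for real encoder outputs.
model = MultiModalMistakeClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```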
DOI: 10.48550/arxiv.2410.05343