DISC: A Large-scale Virtual Dataset for Simulating Disaster Scenarios

Bibliographic Details
Published in: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 187-194
Main Authors: Jeon, Hae-Gon; Im, Sunghoon; Lee, Byeong-Uk; Choi, Dong-Geol; Hebert, Martial; Kweon, In So
Format: Conference Proceeding
Language: English
Published: IEEE, 01.11.2019

More Information
Summary: In this paper, we present the first large-scale synthetic dataset for visual perception in disaster scenarios, and analyze state-of-the-art methods for multiple computer vision tasks with reference baselines. We simulated before- and after-disaster scenarios, such as fire and building collapse, for fifteen different locations in realistic virtual worlds. The dataset consists of more than 300K high-resolution stereo image pairs, all annotated with ground-truth data for semantic segmentation, depth, optical flow, surface normal estimation, and camera pose estimation. To create realistic disaster scenes, we manually augmented the effects with 3D models using physically-based graphics tools. We use our dataset to train state-of-the-art methods and evaluate how well these methods can recognize the disaster situations and produce reliable results on virtual scenes as well as real-world images. The results obtained from each task are then used as inputs to the proposed visual odometry network for generating 3D maps of buildings on fire. Finally, we discuss challenges for future research.
ISSN:2153-0866
DOI:10.1109/IROS40897.2019.8967839