Adapting Complex Event Detection to Perceptual Domain Shifts


Bibliographic Details
Published in: MILCOM IEEE Military Communications Conference, pp. 1–6
Main Authors: Wang, Brian; de Gortari Briseno, Julian; Han, Liying; Phillips, Henry; Craighead, Jeffrey; Purman, Ben; Kaplan, Lance; Srivastava, Mani
Format: Conference Proceeding
Language: English
Published: IEEE, 28.10.2024

Summary: Deep learning models have been deployed to detect complex events from unstructured sensory data, both to support human decision-making and to control autonomous systems. However, the strong performance of these models is restricted to events spanning short intervals of time and space, owing to the limited context memory of their architectures. Detecting events that unfold over long periods of time across multiple spatially distant sensor sources (known as complex events) thus remains challenging for purely neural methods, particularly as environmental conditions and object appearances change. In recent years, neurosymbolic approaches have been proposed that combine neural perception with symbolic reasoning to capture complex events. However, these approaches still struggle to adapt to perceptual domain shift in complex events. We address these problems in the context of a prototype neurosymbolic system called DANCER, which performs Domain Adaptation and Neurosymbolic inference in Complex Event Reasoning. DANCER aims to provide domain adaptation in a post-deployment setting while minimizing the runtime annotation burden on users. To enable training and evaluation of DANCER, we also provide a physics-based synthetic sensor data generator that creates videos from complex scenario specifications. We evaluate DANCER on a dataset of generated synthetic data and show that domain adaptation yields a 48% increase in complex event detection accuracy while reducing annotation time on our synthetic complex events by up to 2.7x, demonstrating DANCER's ability to detect complex events effectively under perceptual domain shift.
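To illustrate the neurosymbolic idea described in the summary (neural perception emitting atomic detections, with symbolic reasoning composing them into long-horizon complex events), here is a minimal generic sketch. It is not DANCER's actual method: the `Detection` fields, the atomic event labels, and the single temporal rule are all hypothetical, chosen only to show how a symbolic layer can match events that span time and multiple sensors.

```python
from dataclasses import dataclass

# Hypothetical atomic detection that a neural perception model might emit
# per timestep; DANCER's real event vocabulary and rule language are not
# specified in this record, so these names are illustrative only.
@dataclass
class Detection:
    t: int          # timestep
    sensor: str     # which spatially distributed sensor fired
    label: str      # atomic event label, e.g. "vehicle_stop"

def detect_complex_event(stream, first, then, max_gap):
    """Symbolic rule: a `first` atomic event followed by a `then` event
    on any sensor within `max_gap` timesteps counts as one complex event."""
    pending = []   # timesteps at which `first` was observed
    matches = []   # (t_first, t_then) pairs for each detected complex event
    for d in stream:
        if d.label == first:
            pending.append(d.t)
        elif d.label == then:
            # drop stale `first` events, then match the earliest valid one
            pending = [t0 for t0 in pending if d.t - t0 <= max_gap]
            if pending:
                matches.append((pending.pop(0), d.t))
    return matches

stream = [
    Detection(0, "cam1", "vehicle_stop"),
    Detection(3, "cam2", "person_exit"),
    Detection(40, "cam1", "vehicle_stop"),
    Detection(90, "cam3", "person_exit"),  # gap of 50 > max_gap: no match
]
print(detect_complex_event(stream, "vehicle_stop", "person_exit", max_gap=10))
# → [(0, 3)]
```

A rule like this stays valid under perceptual domain shift as long as the neural detector's atomic labels remain reliable, which is why adaptation in such systems can focus on the perception layer.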
ISSN: 2155-7586
DOI: 10.1109/MILCOM61039.2024.10773796