Video Event Detection Using Motion Relativity and Feature Selection

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 16, No. 5, pp. 1303-1315
Main Authors: Wang, Feng; Sun, Zhanhu; Jiang, Yu-Gang; Ngo, Chong-Wah
Format: Journal Article
Language: English
Published: New York, NY: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2014
Summary: Event detection plays an essential role in video content analysis. In this paper, we present an approach to video event detection based on motion relativity and feature selection. First, we propose a new motion feature, the Expanded Relative Motion Histogram of Bag-of-Visual-Words (ERMH-BoW), which exploits motion relativity for event detection. In ERMH-BoW, the "what" aspect of an event is represented with Bag-of-Visual-Words (BoW), and relative motion histograms between different visual words depict the objects' activities, i.e., the "how" aspect of the event. ERMH-BoW thus integrates both the "what" and "how" aspects into a complete event description. We further show that, by relying on motion relativity, ERMH-BoW is invariant to varying camera movement and faithfully describes object activities in an event. Moreover, compared with other motion features, ERMH-BoW encodes not only the motion of individual objects but also the interactions between different objects and scenes. Second, to address the high dimensionality of the ERMH-BoW feature, we propose a feature-selection approach based on information gain and informativeness weighting that yields a cleaner and more discriminative feature set. Experiments on several challenging datasets provided by TRECVID for the MED (Multimedia Event Detection) task demonstrate that the proposed approach outperforms state-of-the-art approaches for video event detection.
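
As an informal illustration of the abstract's first idea, the Python sketch below (not taken from the paper; the function name, array shapes, and binning choices are assumptions made here) builds a tiny relative-motion histogram between visual-word pairs. Because global camera motion adds the same offset to every keypoint's motion vector, it cancels when two vectors are subtracted, which is the intuition behind the camera-motion invariance claimed for ERMH-BoW.

    import numpy as np

    def relative_motion_histogram(motions, words, num_words=2, num_bins=8):
        # motions: (N, 2) per-keypoint motion vectors (e.g., from optical flow)
        # words:   (N,) visual-word index assigned to each keypoint
        # Returns a (num_words, num_words, num_bins) histogram of the direction
        # of relative motion between every ordered pair of visual words,
        # weighted by the relative-motion magnitude.
        hist = np.zeros((num_words, num_words, num_bins))
        n = len(words)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                rel = motions[i] - motions[j]   # camera motion cancels here
                if np.allclose(rel, 0.0):
                    continue
                angle = np.arctan2(rel[1], rel[0]) % (2 * np.pi)
                b = int(angle / (2 * np.pi) * num_bins) % num_bins
                hist[words[i], words[j], b] += np.linalg.norm(rel)
        return hist

    # Toy usage: two objects (words 0 and 1) move toward each other while the
    # camera pans by (+3, 0); the pan does not affect the relative motion.
    motions = np.array([[3.0 + 2.0, 0.0], [3.0 - 2.0, 0.0]])
    words = np.array([0, 1])
    h = relative_motion_histogram(motions, words)
    print(h[0, 1].argmax(), h[1, 0].argmax())   # opposite relative directions

For the second idea, the following sketch computes the information gain of a single binarized feature dimension with respect to binary event labels, the kind of criterion the abstract's feature-selection step refers to; the binarization and the omission of the paper's informativeness weighting are simplifying assumptions.

    import numpy as np

    def information_gain(x_binary, y):
        # IG(f) = H(y) - H(y | f), with the feature f binarized to {0, 1}.
        def entropy(labels):
            if len(labels) == 0:
                return 0.0
            p = np.bincount(labels, minlength=2) / len(labels)
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        h_y = entropy(y)
        h_cond = 0.0
        for v in (0, 1):
            mask = (x_binary == v)
            h_cond += mask.mean() * entropy(y[mask])
        return h_y - h_cond

    # Toy usage: a dimension that fires only on positive clips is maximally
    # informative; one that fires independently of the label carries no gain.
    y = np.array([1, 1, 0, 0])
    print(information_gain(np.array([1, 1, 0, 0]), y))   # 1.0
    print(information_gain(np.array([1, 0, 1, 0]), y))   # 0.0
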
ISSN: 1520-9210
EISSN: 1941-0077
DOI: 10.1109/TMM.2014.2315780