Automatic Visual Concept Learning for Social Event Understanding

Bibliographic Details
Published in: IEEE Transactions on Multimedia, Vol. 17, No. 3, pp. 346-358
Main Authors: Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, M. Shamim Hossain
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2015

Summary: Vision-based event analysis is extremely difficult because of the varied concepts (objects, actions, and scenes) contained in videos. Although visual concept-based event analysis has achieved significant progress, it has two disadvantages: visual concepts are defined manually, and each concept has only one corresponding classifier in traditional methods. To address these issues, we propose a novel automatic visual concept learning algorithm for social event understanding in videos. First, instead of defining visual concepts manually, we propose an effective automatic concept mining algorithm that draws on Wikipedia, N-gram Web services, and Flickr. Then, based on the learned visual concepts, we propose a novel boosting concept learning algorithm that iteratively learns multiple classifiers for each concept to enhance its representative discriminability. Extensive experimental evaluations on the collected dataset demonstrate the effectiveness of the proposed algorithm for social event understanding.
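The summary describes learning multiple classifiers per concept through boosting. The record does not give the paper's exact formulation, so the following is only a minimal AdaBoost-style sketch of the general idea: weak classifiers (here, hypothetical one-dimensional threshold stumps over feature vectors) are trained iteratively for one concept, with misclassified samples reweighted between rounds, and their weighted votes combined into a single concept score. All function names and the toy data are illustrative, not taken from the paper.

```python
# Hypothetical AdaBoost-style sketch of "multiple classifiers per concept".
# Weak learners are 1-D threshold stumps; labels are +1 / -1.
import math

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) with the lowest weighted error."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[f] >= thr else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, pol)
    return best  # (weighted error, feature index, threshold, polarity)

def boost_concept(X, y, rounds=5):
    """Iteratively learn several weak classifiers for one visual concept."""
    n = len(X)
    w = [1.0 / n] * n          # uniform sample weights to start
    ensemble = []
    for _ in range(rounds):
        err, f, thr, pol = train_stump(X, y, w)
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        if err >= 0.5:         # no better than chance: stop boosting
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, thr, pol))
        # Reweight: increase the weight of misclassified samples.
        for i, (xi, yi) in enumerate(zip(X, y)):
            pred = pol if xi[f] >= thr else -pol
            w[i] *= math.exp(-alpha * yi * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all classifiers learned for the concept."""
    score = sum(a * (p if x[f] >= t else -p) for a, f, t, p in ensemble)
    return 1 if score >= 0 else -1

# Toy data: the concept is present (+1) when the second feature is large.
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.2], [0.8, 0.1]]
y = [1, 1, -1, -1]
clf = boost_concept(X, y)
print([predict(clf, x) for x in X])  # separable toy set -> [1, 1, -1, -1]
```

In this sketch, the "representative discriminability" of a concept comes from the ensemble: each round adds a classifier that focuses on the samples the previous ones got wrong.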
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2015.2393635