Multimodal activity recognition with local block CNN and attention-based spatial weighted CNN


Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, Vol. 60, pp. 38-43
Main Authors: Zhu, Suguo; Fang, Zhenying; Wang, Yi; Yu, Jun; Du, Junping
Format: Journal Article
Language: English
Published: Elsevier Inc., 01.04.2019

Summary: Deep learning based human activity recognition approaches combine spatial and temporal information to complete the recognition task. Temporal information is extracted by optical flow, which is often compensated by a warping method to achieve better performance. However, these methods usually take global features as the starting point, consider only the global information of video frames, and ignore the local information that reflects changes in human behavior, making the algorithm sensitive to external conditions such as occlusion and illumination change. To address these problems, this paper fuses the local spatial features, global spatial features, and temporal features of video frames to recognize different actions, and further extracts visual attention weights to constrain the global spatial features. Experiments show that the proposed algorithm achieves better accuracy than existing methods.
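The fusion scheme the summary describes, combining attention-constrained global spatial features with local spatial and temporal features, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the function names, feature dimensions, and the softmax attention over spatial regions are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_features(local_feat, global_feat, temporal_feat, attn_scores):
    """Fuse the three feature streams into one action descriptor.

    local_feat:    (d,)   local spatial feature (e.g. from a local block CNN)
    global_feat:   (r, d) per-region global spatial features
    temporal_feat: (d,)   temporal feature (e.g. from optical flow)
    attn_scores:   (r,)   raw attention score per spatial region (assumed)
    """
    # Attention weights constrain the global spatial features:
    # regions with higher scores contribute more to the pooled feature.
    w = softmax(attn_scores)
    weighted_global = (global_feat * w[:, None]).sum(axis=0)
    # Concatenate the three streams into a single descriptor.
    return np.concatenate([local_feat, weighted_global, temporal_feat])

# Toy example: 4 spatial regions, 8-dimensional descriptors.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=8),
                      rng.normal(size=(4, 8)),
                      rng.normal(size=8),
                      rng.normal(size=4))
print(fused.shape)  # (24,)
```

In a full system the fused descriptor would feed a classifier; here it simply shows how the attention weights gate the global stream before the three streams are concatenated.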
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2018.12.026