Sparse Canonical Temporal Alignment With Deep Tensor Decomposition for Action Recognition


Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 26, No. 2, pp. 738-750
Main Authors: Jia, Chengcheng; Shao, Ming; Fu, Yun
Format: Journal Article
Language: English
Published: United States: IEEE, 01.02.2017

Summary: In this paper, we address three problems in action recognition: sub-actions, multiple subjects, and multiple modalities, by reducing the diversity of intra-class samples. The main stage consists of canonical temporal alignment and key-frame selection. Temporal alignment aims to reduce the diversity of intra-class samples; however, densely sampled frames may yield misaligned or overlapped alignments and degrade recognition performance. To overcome this problem, we propose a sparse canonical temporal alignment (SCTA) method, which selects and aligns key frames from two sequences to reduce diversity. To extract better features from the key frames, we propose a deep non-negative tensor factorization (DNTF) method that finds a tensor subspace integrated with the SCTA scheme. First, we model an action sequence as a third-order tensor with spatiotemporal structure. Then, we design a DNTF scheme to find a tensor subspace in both the spatial and temporal directions. In particular, in the first layer the original tensor is decomposed into two low-rank tensors by NTF, and in the second layer each low-rank tensor is further decomposed by tensor-train for time efficiency. Finally, our framework, composed of SCTA and DNTF, solves the three problems and extracts effective features for action recognition. Experiments on synthetic data and the MSRDailyActivity3D and MSRActionPairs data sets show that our method outperforms competing methods in terms of accuracy.
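To make the two-layer decomposition concrete, the NumPy sketch below factorizes a third-order action tensor (height x width x frames): a first non-negative layer via standard Lee-Seung multiplicative NMF updates on the mode-3 unfolding, and a second layer that compresses a resulting factor with a tensor-train (TT-SVD) decomposition. This is a minimal illustration, not the authors' implementation; the helper names (ntf_layer, tensor_train), the rank choices, the update rule, and the way the two layers are chained are all simplifying assumptions.

```python
# Illustrative sketch only: NOT the paper's DNTF code. Ranks, update
# rules, and layer chaining are assumptions made for this toy example.
import numpy as np

def ntf_layer(T, rank, n_iter=200, eps=1e-9):
    """Layer 1 (sketch): non-negative factorization of a 3rd-order action
    tensor T (height x width x frames) via multiplicative NMF updates on
    its mode-3 unfolding, yielding a spatial and a temporal factor."""
    h, w, f = T.shape
    X = T.reshape(h * w, f)                  # mode-3 unfolding (pixels x frames)
    rng = np.random.default_rng(0)
    W = rng.random((h * w, rank))            # spatial factor
    H = rng.random((rank, f))                # temporal factor
    for _ in range(n_iter):                  # Lee & Seung multiplicative updates
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def tensor_train(T, max_rank):
    """Layer 2 (sketch): tensor-train decomposition of a 3rd-order tensor
    by sequential truncated SVDs (TT-SVD), traded for time efficiency.
    Reconstruction: T ~ einsum('ia,abj,bk->ijk', G1, G2, G3)."""
    d1, d2, d3 = T.shape
    U, s, Vt = np.linalg.svd(T.reshape(d1, d2 * d3), full_matrices=False)
    r1 = min(max_rank, len(s))
    G1 = U[:, :r1]                                    # core 1: d1 x r1
    M = (np.diag(s[:r1]) @ Vt[:r1]).reshape(r1 * d2, d3)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r2 = min(max_rank, len(s))
    G2 = U[:, :r2].reshape(r1, d2, r2)                # core 2: r1 x d2 x r2
    G3 = np.diag(s[:r2]) @ Vt[:r2]                    # core 3: r2 x d3
    return G1, G2, G3

# Toy usage: a random non-negative "action sequence", 20 frames of 16 x 16.
T = np.abs(np.random.default_rng(1).random((16, 16, 20)))
W, H = ntf_layer(T, rank=5)                           # layer 1: NTF factors
G1, G2, G3 = tensor_train(W.reshape(16, 16, 5), 4)    # layer 2: TT on a factor
```

TT-SVD is used here because, as the summary notes, each truncated SVD costs far less than re-running an iterative factorization on the full tensor, which is the stated motivation for the second layer.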
ISSN: 1057-7149
EISSN: 1941-0042
DOI: 10.1109/TIP.2016.2621664