View knowledge transfer network for multi-view action recognition

Bibliographic Details
Published in: Image and Vision Computing, Vol. 118, p. 104357
Main Authors: Liang, Zixi; Yin, Ming; Gao, Junli; He, Yicheng; Huang, Weitian
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.02.2022

Summary: As data in many practical applications occur in, or can be captured from, multiple views, multi-view action recognition has received much attention recently, since the complementary and heterogeneous information across views can be exploited to promote downstream tasks. However, most existing methods assume that the multi-view data are complete, an assumption that may not hold in real-world applications. To this end, this paper proposes a novel View Knowledge Transfer Network (VKTNet) to handle multi-view action recognition even when some views are incomplete. Specifically, view knowledge transfer is realized with a conditional generative adversarial network (cGAN) that reproduces each view's latent representation conditioned on the other view's information. As such, high-level semantic features are effectively extracted to bridge the semantic gap between two different views. In addition, to fuse the decision results of the individual views efficiently, a Siamese Scaling Network (SSN) is proposed instead of a simple classifier. Experimental results on three public datasets show that the model achieves superior performance against other methods when all views are available, while avoiding performance degradation when some views are missing.
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2021.104357
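
The summary describes two components: cross-view knowledge transfer via a conditional GAN that reproduces one view's latent representation conditioned on another view, and decision fusion via a Siamese Scaling Network. Below is a minimal, hypothetical PyTorch sketch of the cGAN transfer step only; the module names, feature dimension, and network depths are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch: reproduce view B's latent representation
# conditioned on view A's features, as described in the abstract.
# FEAT_DIM, Generator, and Discriminator are assumed names/sizes.
import torch
import torch.nn as nn

FEAT_DIM = 512  # assumed per-view feature size

class Generator(nn.Module):
    """Maps the observed view's features to the missing view's latent space."""
    def __init__(self, dim=FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, source_feat):
        return self.net(source_feat)

class Discriminator(nn.Module):
    """Scores whether a (condition, latent) pair is real or generated."""
    def __init__(self, dim=FEAT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(dim, 1),
        )

    def forward(self, condition_feat, latent_feat):
        return self.net(torch.cat([condition_feat, latent_feat], dim=1))

# One adversarial step with random stand-in features for two views.
gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
view_a = torch.randn(8, FEAT_DIM)   # features from the available view
view_b = torch.randn(8, FEAT_DIM)   # features from the view to reconstruct

fake_b = gen(view_a)
real_score = disc(view_a, view_b)
fake_score = disc(view_a, fake_b.detach())
d_loss = bce(real_score, torch.ones_like(real_score)) + \
         bce(fake_score, torch.zeros_like(fake_score))
g_loss = bce(disc(view_a, fake_b), torch.ones_like(real_score))
print(d_loss.item(), g_loss.item())

In this reading, the generated latent representation stands in for a missing view at test time, and the per-view decisions would then be fused (in the paper, by the proposed SSN rather than the plain concatenation shown here).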