Action Anticipation with RBF Kernelized Feature Mapping RNN
Format | Journal Article |
Language | English |
Published | 18.11.2019 |
Summary: | We introduce a novel Recurrent Neural Network-based algorithm for future
video feature generation and action anticipation, called the feature mapping RNN.
Our RNN architecture builds on three effective principles of machine learning:
parameter sharing, Radial Basis Function (RBF) kernels, and adversarial training.
Using only some of the earliest frames of a video, the feature mapping RNN
generates future features with a fraction of the parameters needed by a
traditional RNN. By feeding these future features into a simple multi-layer
perceptron equipped with an RBF kernel layer, we are able to accurately predict
the action in the video. In our experiments, we obtain an 18% improvement over
the prior state of the art for action anticipation on the JHMDB-21 dataset, 6%
on UCF101-24, and 13% on UT-Interaction. |
DOI | 10.48550/arxiv.1911.07806 |
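The summary mentions feeding predicted future features into a multi-layer perceptron with an RBF kernel layer. As a rough illustration of what such a layer computes, here is a minimal NumPy sketch: each input feature vector is mapped to activations phi_j(x) = exp(-gamma * ||x - c_j||^2) over a set of centres, which a linear classifier then scores. All names, sizes, and the value of gamma are illustrative assumptions, not the paper's implementation (where centres and weights would be learned).

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_layer(x, centres, gamma):
    """Map features x of shape (n, d) to RBF activations of shape (n, k):
    phi_j(x) = exp(-gamma * ||x - c_j||^2) for each centre c_j."""
    # Squared Euclidean distance from every input to every centre,
    # via broadcasting: (n, 1, d) - (1, k, d) -> (n, k, d) -> (n, k).
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Illustrative dimensions: feature size, number of centres, number of actions.
d, k, n_classes = 16, 8, 5
centres = rng.normal(size=(k, d))    # learned in practice; random here
W = rng.normal(size=(k, n_classes))  # linear classifier on RBF activations

x = rng.normal(size=(3, d))          # stand-in for predicted future features
phi = rbf_layer(x, centres, gamma=0.1)
logits = phi @ W                     # (3, n_classes) action scores
pred = logits.argmax(axis=1)         # predicted action index per input
print(phi.shape, pred.shape)         # (3, 8) (3,)
```

Because each activation is a Gaussian of the distance to a centre, every entry of `phi` lies in (0, 1], and inputs close to a centre produce activations near 1 for that centre.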