Representing Videos as Discriminative Sub-graphs for Action Recognition
Main Authors | , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 11.01.2022 |
Summary: | Human actions typically have combinatorial structure: subjects,
objects, and the spatio-temporal interactions between them. Discovering such
structures is therefore a rewarding way to reason about the dynamics of
interactions and recognize the actions. In this paper, we introduce a new
design of sub-graphs to represent and encode the discriminative patterns of
each action in videos. Specifically, we present the MUlti-scale Sub-graph
LEarning (MUSLE) framework, which builds space-time graphs and clusters them
into compact sub-graphs at each scale with respect to the number of nodes.
Technically, MUSLE produces 3D bounding boxes, i.e., tubelets, in each video
clip as graph nodes and uses dense connectivity between tubelets as graph
edges. For each action category, we perform online clustering to decompose the
graph into sub-graphs at each scale by learning a Gaussian Mixture Layer, and
we select the discriminative sub-graphs as action prototypes for recognition.
Extensive experiments on the Something-Something V1 & V2 and Kinetics-400
datasets show superior results compared to state-of-the-art methods. Most
remarkably, MUSLE achieves the best accuracy reported to date, 65.0%, on the
Something-Something V2 validation set. |
---|---|
DOI: | 10.48550/arxiv.2201.04027 |
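
The pipeline summarized above can be sketched in a simplified, offline form: treat tubelet feature vectors as graph nodes, take pairwise similarities as the dense edge weights, and soft-cluster the nodes into sub-graphs with a Gaussian mixture. Everything here is illustrative: the function names and feature shapes are assumptions, and scikit-learn's `GaussianMixture` merely stands in for the paper's learned, end-to-end Gaussian Mixture Layer.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def build_dense_graph(tubelet_feats):
    """Dense space-time graph: each row of `tubelet_feats` is one tubelet's
    feature vector (a graph node); the returned adjacency matrix holds the
    pairwise cosine similarities used as dense edge weights."""
    norms = np.linalg.norm(tubelet_feats, axis=1, keepdims=True)
    normed = tubelet_feats / np.clip(norms, 1e-8, None)
    return normed @ normed.T  # (n_tubelets, n_tubelets) dense connectivity


def cluster_subgraphs(tubelet_feats, n_subgraphs=3, seed=0):
    """Decompose the graph into sub-graphs by assigning each node to a
    Gaussian mixture component; returns one array of node indices per
    sub-graph (an offline stand-in for the learned Gaussian Mixture Layer)."""
    gmm = GaussianMixture(n_components=n_subgraphs, random_state=seed)
    labels = gmm.fit_predict(tubelet_feats)
    return [np.where(labels == k)[0] for k in range(n_subgraphs)]


# Toy usage: 12 tubelets with 8-dim features.
feats = np.random.RandomState(0).randn(12, 8)
adjacency = build_dense_graph(feats)
subgraphs = cluster_subgraphs(feats, n_subgraphs=3)
```

In the paper the clustering is performed online per action category and the most discriminative sub-graphs are kept as prototypes; the sketch above only shows the graph construction and the mixture-based decomposition step.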