ASM2TV: An Adaptive Semi-Supervised Multi-Task Multi-View Learning Framework for Human Activity Recognition
Main Authors | |
---|---|
Format | Journal Article |
Language | English |
Published | 18.05.2021 |
Summary: | Many real-world scenarios, such as human activity recognition (HAR) in IoT,
can be formalized as multi-task multi-view learning problems. Each task consists
of multiple shared feature views collected from multiple sources, either
homogeneous or heterogeneous. A common strategy in recent approaches is to apply
a hard/soft parameter-sharing scheme at the initial phase, separately for each
view across tasks, to uncover common knowledge, under the assumption that all
views are conditionally independent. In practice, however, views across tasks
may well be related to one another, and purely supervised methods can be
insufficient when labeled data is scarce. To tackle these challenges, we
introduce ASM2TV, a novel framework for semi-supervised multi-task multi-view
learning. We present a new perspective, the gating control policy: a learnable
task-view-interacted sharing policy that adaptively selects the most suitable
candidate shared block for any view of any task, uncovering more fine-grained
task-view relatedness and improving inference efficiency. In addition, our
gathering consistency adaption procedure takes full advantage of large amounts
of unlabeled, fragmented time-series, making ASM2TV a general framework that
accommodates a wide range of applications. Experiments on two diverse real-world
HAR benchmark datasets collected from various subjects and sources demonstrate
our framework's superiority over state-of-the-art methods. The code is available
at https://github.com/zachstarkk/ASM2TV. |
---|---|
DOI: | 10.48550/arxiv.2105.08643 |
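To make the gating control policy idea concrete, here is a minimal, hypothetical PyTorch sketch: it assumes K candidate shared blocks and one learnable logit vector per (task, view) pair, with a Gumbel-softmax used to pick a block in a differentiable way. The class name, shapes, and the Gumbel-softmax choice are illustrative assumptions, not the authors' implementation (see the GitHub repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedSharedBlocks(nn.Module):
    """Hypothetical gating over K candidate shared blocks, one gate per (task, view)."""

    def __init__(self, num_tasks, num_views, num_blocks, in_dim, hidden_dim):
        super().__init__()
        # Candidate shared blocks, reusable across all tasks and views.
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
             for _ in range(num_blocks)]
        )
        # Learnable selection logits over the candidate blocks for every (task, view) pair.
        self.gate_logits = nn.Parameter(torch.zeros(num_tasks, num_views, num_blocks))

    def forward(self, x, task_id, view_id, tau=1.0):
        # Differentiable (near-)one-hot choice of a shared block for this task-view pair.
        gate = F.gumbel_softmax(self.gate_logits[task_id, view_id], tau=tau, hard=True)
        # Run all candidate blocks and keep the selected one's output.
        outputs = torch.stack([block(x) for block in self.blocks], dim=0)  # (K, batch, hidden)
        return torch.einsum("k,kbh->bh", gate, outputs)
```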
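The gathering consistency adaption procedure is described only at a high level in the abstract; the sketch below shows one common form such a consistency term over unlabeled fragments could take, encouraging agreement between predictions on a fragment and on an augmented copy of it. The function name, the augmentation hook, the model signature, and the KL-divergence choice are assumptions for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F


def unlabeled_consistency_loss(model, fragment, augment, task_id, view_id):
    """Hypothetical consistency term on an unlabeled time-series fragment:
    predictions on the raw fragment (treated as a fixed target) and on an
    augmented copy of it should agree."""
    with torch.no_grad():
        # Fixed target distribution from the unperturbed fragment.
        target = F.softmax(model(fragment, task_id, view_id), dim=-1)
    # Log-probabilities predicted for the augmented copy.
    pred = F.log_softmax(model(augment(fragment), task_id, view_id), dim=-1)
    # Penalize divergence between the two predictive distributions (batch-averaged KL).
    return F.kl_div(pred, target, reduction="batchmean")
```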