A Novel CNN-based Bi-LSTM parallel model with attention mechanism for human activity recognition with noisy data
| Published in | Scientific Reports, Vol. 12, no. 1, p. 7878 |
| --- | --- |
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | London: Nature Publishing Group UK, 12.05.2022 (Nature Portfolio) |
| Summary | Boosted by mobile communication technologies, Human Activity Recognition (HAR) based on smartphones has attracted increasing attention from researchers. One of the main challenges is the classification time and accuracy when processing long-time-dependent sequence samples with noisy or missing data. In this paper, a 1-D Convolutional Neural Network (CNN)-based bi-directional Long Short-Term Memory (LSTM) parallel model with attention mechanism (ConvBLSTM-PMwA) is proposed. The original sensor features are segmented into sub-segments by a well-designed equal-time-step sliding window and fed into the 1-D CNN-based bi-directional LSTM parallel layer to accelerate feature extraction from noisy and incomplete data. The weights of the extracted features are redistributed by the attention mechanism and integrated into complete features. Finally, the classification results are obtained with a fully connected layer. The performance is evaluated on the public UCI and WISDM HAR datasets. The results show that the ConvBLSTM-PMwA model outperforms existing CNN and RNN models in both classification accuracy (96.71%) and computational time (at least 1.1 times faster), even when facing HAR data with noise. |
| ISSN | 2045-2322 |
| DOI | 10.1038/s41598-022-11880-8 |
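
The summary describes a pipeline of sliding-window segmentation, a 1-D CNN branch running in parallel with a bi-directional LSTM, attention-based re-weighting of the extracted features, and a fully connected classifier. The following is a minimal, hypothetical PyTorch sketch of how such a parallel structure could be wired; the layer widths, kernel sizes, merge-by-addition step, and the simple additive attention are illustrative assumptions and do not reproduce the authors' exact ConvBLSTM-PMwA configuration.

```python
# Hypothetical sketch of a ConvBLSTM-PMwA-style network in PyTorch.
# Channel counts, kernel sizes, and the way the parallel branches are
# merged are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class ConvBLSTMSketch(nn.Module):
    def __init__(self, n_channels=9, n_classes=6, hidden=64):
        super().__init__()
        # Parallel branch 1: 1-D CNN over the time axis of each window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 2 * hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Parallel branch 2: bi-directional LSTM over the same window.
        self.bilstm = nn.LSTM(n_channels, hidden, batch_first=True,
                              bidirectional=True)
        # Additive attention that re-weights the per-time-step features.
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        c = self.cnn(x.transpose(1, 2))         # (batch, 2*hidden, time)
        c = c.transpose(1, 2)                   # (batch, time, 2*hidden)
        r, _ = self.bilstm(x)                   # (batch, time, 2*hidden)
        h = c + r                               # merge the parallel branches
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        z = (w * h).sum(dim=1)                  # weighted feature integration
        return self.fc(z)                       # class logits


if __name__ == "__main__":
    # One batch of fixed-length sliding windows: 8 windows, 128 time steps,
    # 9 sensor channels (e.g. tri-axial accelerometer/gyroscope signals).
    windows = torch.randn(8, 128, 9)
    print(ConvBLSTMSketch()(windows).shape)     # torch.Size([8, 6])
```

In this sketch the CNN and bi-LSTM branches produce feature sequences of the same width so they can be summed element-wise before attention; other merge strategies (concatenation followed by a projection) would work equally well and may be closer to what the paper actually uses.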