MSTCN: A multiscale temporal convolutional network for user independent human activity recognition [version 2; peer review: 2 approved, 1 approved with reservations]

Bibliographic Details
Published in F1000 Research Vol. 10; p. 1261
Main Authors Raja Sekaran, Sarmela; Pang, Ying Han; Ling, Goh Fan; Yin, Ooi Shih
Format Journal Article
Language English
Published London Faculty of 1000 Ltd 2022
F1000 Research Limited
F1000 Research Ltd
Subjects
ISSN 2046-1402
DOI 10.12688/f1000research.73175.2

Abstract Background: In recent years, human activity recognition (HAR) has been an active research topic due to its widespread application in fields such as healthcare, sports, and patient monitoring. HAR approaches can be categorised as handcrafted feature (HCF) methods and deep learning (DL) methods. HCF methods involve complex data pre-processing and manual feature extraction, which may expose the models to high bias and the loss of crucial implicit patterns. Hence, DL approaches are introduced for their exceptional recognition performance. A Convolutional Neural Network (CNN) extracts spatial features while preserving localisation, but it hardly captures temporal features. A Recurrent Neural Network (RNN) learns temporal features, but it is susceptible to gradient vanishing and suffers from short-term memory problems. Unlike the RNN, the Long Short-Term Memory (LSTM) network captures relatively longer-term dependencies; however, it consumes more computation and memory because it computes and stores partial results at each level. Methods: This work proposes a novel multiscale temporal convolutional network (MSTCN) based on the Inception model with a temporal convolutional architecture. Unlike HCF methods, MSTCN requires minimal pre-processing and no manual feature engineering. Multiple separable convolutions with different-sized kernels are used in MSTCN for multiscale feature extraction, and dilations are applied to each separable convolution to enlarge the receptive fields without increasing the model parameters. Moreover, residual connections are utilised to prevent information loss and gradient vanishing. These features enable MSTCN to possess a longer effective history while maintaining relatively low in-network computation. Results: The performance of MSTCN is evaluated on the UCI and WISDM datasets using a subject-independent protocol with no overlapping subjects between the training and testing sets. MSTCN achieves accuracies of 97.42% on UCI and 96.09% on WISDM.
Conclusion: The proposed MSTCN outperforms other state-of-the-art methods, achieving high recognition accuracy without requiring any manual feature engineering.
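The abstract's two key efficiency claims — that dilation enlarges the receptive field without adding parameters, and that separable convolutions keep the parameter count low — can be illustrated with a small back-of-the-envelope sketch. This is not the authors' code; the kernel size, dilation rates, and channel counts below are assumed values for illustration only, not the configuration reported in the paper.

```python
# Illustrative arithmetic for dilated, depthwise-separable 1-D convolutions,
# the two mechanisms the MSTCN abstract credits for its long effective
# history at low cost. All concrete numbers here are assumptions.

def receptive_field(kernel_size: int, dilations: list[int]) -> int:
    """Receptive field of a stack of dilated 1-D convolutions:
    each layer with dilation d adds (kernel_size - 1) * d time steps."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

def separable_conv_params(c_in: int, c_out: int, kernel_size: int) -> int:
    """Parameters of a depthwise-separable 1-D conv (no bias):
    depthwise filter per input channel + 1x1 pointwise mixing."""
    return c_in * kernel_size + c_in * c_out

def standard_conv_params(c_in: int, c_out: int, kernel_size: int) -> int:
    """Parameters of a standard 1-D conv (no bias)."""
    return c_in * c_out * kernel_size

if __name__ == "__main__":
    # Four layers with exponentially growing dilation (1, 2, 4, 8): the
    # receptive field grows with the dilation sum, while each layer's
    # parameter count is unchanged by its dilation rate.
    print(receptive_field(3, [1, 2, 4, 8]))   # 31 time steps
    # Separable vs. standard convolution at 64 -> 64 channels, kernel 3:
    print(separable_conv_params(64, 64, 3))   # 4288
    print(standard_conv_params(64, 64, 3))    # 12288
```

Doubling the dilation at each layer thus gives a receptive field that grows exponentially with depth for a parameter cost that grows only linearly, which is the standard temporal-convolutional-network trade-off the abstract describes.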
Author Raja Sekaran, Sarmela
Ling, Goh Fan
Pang, Ying Han
Yin, Ooi Shih
Author_xml – sequence: 1
  givenname: Sarmela
  orcidid: 0000-0002-6465-5503
  surname: Raja Sekaran
  fullname: Raja Sekaran, Sarmela
  organization: Faculty of Information Science and Technology, Multimedia University, Ayer Keroh, Melaka, 75450, Malaysia
– sequence: 2
  givenname: Ying Han
  orcidid: 0000-0002-3781-6623
  surname: Pang
  fullname: Pang, Ying Han
  email: yhpang@mmu.edu.my
  organization: Faculty of Information Science and Technology, Multimedia University, Ayer Keroh, Melaka, 75450, Malaysia
– sequence: 3
  givenname: Goh Fan
  surname: Ling
  fullname: Ling, Goh Fan
  organization: Millapp Sdn Bhd, Bangsar South, Kuala Lumpur, 59200, Malaysia
– sequence: 4
  givenname: Ooi Shih
  orcidid: 0000-0002-3024-1011
  surname: Yin
  fullname: Yin, Ooi Shih
  organization: Faculty of Information Science and Technology, Multimedia University, Ayer Keroh, Melaka, 75450, Malaysia
CitedBy_id crossref_primary_10_1371_journal_pone_0304655
crossref_primary_10_1007_s10044_024_01319_3
Cites_doi 10.1016/j.mejo.2018.01.015
10.1145/1964897.1964918
10.1007/s10618-020-00710-y
10.1155/2020/5426532
10.1177/1550147716683687
10.1109/JBHI.2019.2909688
10.1016/j.asoc.2017.09.027
10.1109/ACCESS.2021.3078184
10.5120/ijca2015906733
10.3390/app10238482
10.1109/TETC.2018.2870047
10.1016/j.pmcj.2011.06.004
10.1016/j.eswa.2016.04.032
10.1109/ACCESS.2018.2890675
10.1109/JIOT.2018.2889966
ContentType Journal Article
Copyright Copyright: © 2022 Raja Sekaran S et al.
Copyright: © 2022 Raja Sekaran S et al. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Copyright: © 2022 Raja Sekaran S et al. 2022
Copyright_xml – notice: Copyright: © 2022 Raja Sekaran S et al.
– notice: Copyright: © 2022 Raja Sekaran S et al. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
– notice: Copyright: © 2022 Raja Sekaran S et al. 2022
DOI 10.12688/f1000research.73175.2
DatabaseName F1000Research
Faculty of 1000
CrossRef
ProQuest Central (Corporate)
Health & Medical Collection
ProQuest Central (purchase pre-March 2016)
Science Database (Alumni Edition)
ProQuest SciTech Collection
ProQuest Natural Science Collection
ProQuest Hospital Collection
Hospital Premium Collection (Alumni Edition)
ProQuest Central (Alumni) (purchase pre-March 2016)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
Biological Science Database
ProQuest Central
Natural Science Collection
ProQuest One Community College
ProQuest Central
Health Research Premium Collection
Health Research Premium Collection (Alumni)
ProQuest Central Student
SciTech Premium Collection
ProQuest Health & Medical Complete (Alumni)
Biological Sciences
ProQuest Health & Medical Collection
ProQuest Science Database (NC LIVE)
Biological Science Database
ProQuest Central Premium
ProQuest One Academic
Publicly Available Content Database
ProQuest One Academic Middle East (New)
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
ProQuest Central Basic
MEDLINE - Academic
PubMed Central (Full Participant titles)
Directory of Open Access Journals (DOAJ)
DatabaseTitle CrossRef
Publicly Available Content Database
ProQuest Central Student
ProQuest One Academic Middle East (New)
ProQuest Central Essentials
ProQuest Health & Medical Complete (Alumni)
ProQuest Central (Alumni Edition)
SciTech Premium Collection
ProQuest One Community College
ProQuest Natural Science Collection
ProQuest Central China
ProQuest Central
ProQuest One Applied & Life Sciences
Health Research Premium Collection
Health and Medicine Complete (Alumni Edition)
Natural Science Collection
ProQuest Central Korea
Biological Science Collection
ProQuest Central (New)
ProQuest Science Journals (Alumni Edition)
ProQuest Biological Science Collection
ProQuest Central Basic
ProQuest Science Journals
ProQuest One Academic Eastern Edition
ProQuest Hospital Collection
Health Research Premium Collection (Alumni)
Biological Science Database
ProQuest SciTech Collection
ProQuest Hospital Collection (Alumni)
ProQuest Health & Medical Complete
ProQuest One Academic UKI Edition
ProQuest One Academic
ProQuest One Academic (New)
ProQuest Central (Alumni)
MEDLINE - Academic
DatabaseTitleList CrossRef
MEDLINE - Academic
Publicly Available Content Database
Database_xml – sequence: 1
  dbid: DOA
  name: DOAJ Directory of Open Access Journals
  url: https://www.doaj.org/
  sourceTypes: Open Website
– sequence: 2
  dbid: BENPR
  name: ProQuest Central
  url: https://www.proquest.com/central
  sourceTypes: Aggregation Database
DeliveryMethod fulltext_linktorsrc
Discipline Medicine
Women's Studies
Engineering
EISSN 2046-1402
ExternalDocumentID oai_doaj_org_article_77037cb2759b48939d38fa6f30b12cac
PMC9989544
10_12688_f1000research_73175_2
GrantInformation_xml – fundername: Fundamental Research Grant Scheme (FRGS) from Ministry of Education Malaysia
  grantid: FRGS/1/2020/ICT02/MMU/02/7
ISSN 2046-1402
IngestDate Wed Aug 27 00:48:49 EDT 2025
Thu Aug 21 18:38:06 EDT 2025
Fri Jul 11 03:48:39 EDT 2025
Fri Jul 25 11:49:40 EDT 2025
Tue Jul 01 04:27:33 EDT 2025
Thu Apr 24 23:05:39 EDT 2025
Tue Mar 07 06:16:16 EST 2023
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Keywords one-dimensional inertial sensor
smartphone
dilated convolution
human activity recognition
temporal convolutional network
Language English
License http://creativecommons.org/licenses/by/4.0/: This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
LinkModel DirectLink
No competing interests were disclosed.
ORCID 0000-0002-6465-5503
0000-0002-3781-6623
0000-0002-3024-1011
OpenAccessLink https://www.proquest.com/docview/2793370881?pq-origsite=%requestingapplication%
PQID 2793370881
PQPubID 2045578
ParticipantIDs doaj_primary_oai_doaj_org_article_77037cb2759b48939d38fa6f30b12cac
pubmedcentral_primary_oai_pubmedcentral_nih_gov_9989544
proquest_miscellaneous_3168755227
proquest_journals_2793370881
crossref_primary_10_12688_f1000research_73175_2
crossref_citationtrail_10_12688_f1000research_73175_2
faculty1000_research_10_12688_f1000research_73175_2
PublicationCentury 2000
PublicationDate 2022-00-00
PublicationDateYYYYMMDD 2022-01-01
PublicationDate_xml – year: 2022
  text: 2022-00-00
PublicationDecade 2020
PublicationPlace London
PublicationPlace_xml – name: London
– name: London, UK
PublicationTitle F1000 research
PublicationYear 2022
Publisher Faculty of 1000 Ltd
F1000 Research Limited
F1000 Research Ltd
Publisher_xml – name: Faculty of 1000 Ltd
– name: F1000 Research Limited
– name: F1000 Research Ltd
References A Anjum (ref14) 2013
M Ullah (ref11) Oct. 2019; 2019-October
S Yang (ref2) Apr. 2019; 6
Y Kim (ref29) 2016
A Ignatov (ref31) 2018; 62
Z Li (ref26) Jul. 2021
B Kolosnjaji (ref30) 2015; 9375 LNCS
J Huang (ref19) Jan. 2020; 24
C Ding (ref35) May 2017
C Xu (ref25) 2019; 7
K Peppas (ref34) 2020; 10
H Li (ref1) 2019; 88
S Yu (ref10) 2018
C Ronao (ref18) 2017; 13
N Sikder (ref33) 2019
M Lin (ref36) 2014
C Ronao (ref8) Oct. 2016; 59
M Ronald (ref28) 2021; 9
H Ismail Fawaz (ref24) Nov. 2020; 34
D Anguita (ref5) 2013
J Kwapisz (ref12) 2011; 12
F Garcia (ref22) 2019
S Seto (ref6) 2015
Y Lin (ref27) 2020; 2020
X Chen (ref3) 2019
J Wan (ref4) Jan. 2021; 9
Y Kee (ref13) Sep. 2020; 20
S Pienaar (ref20) 2019
G Ogbuabor (ref32) 2018
A Kumar (ref7) 2015; 127
C Ronao (ref17) 2014
O Yazdanbakhsh (ref9) 2019
N Nair (ref21) 2018
Z He (ref15) 2009
C Szegedy (ref23) 2015; 07-12-June
O Lara (ref16) 2012; 8
References_xml – volume: 88
  start-page: 164-172
  year: 2019
  ident: ref1
  article-title: Deep learning of smartphone sensor data for personal health assistance.
  publication-title: Microelectronics J.
  doi: 10.1016/j.mejo.2018.01.015
– volume: 12
  start-page: 74-82
  year: 2011
  ident: ref12
  article-title: Activity recognition using cell phone accelerometers.
  publication-title: ACM SIGKDD Explor. Newsl.
  doi: 10.1145/1964897.1964918
– start-page: 143-148
  year: Jul. 2021
  ident: ref26
  article-title: A lightweight mobile temporal convolution network for multi-location human activity recognition based on wi-fi.
  publication-title: 2021 IEEE/CIC Int. Conf. Commun. China, ICCC Work. 2021.
– volume: 9375 LNCS
  start-page: 378-386
  year: 2015
  ident: ref30
  article-title: Neural network-based user-independent physical activity recognition for mobile devices.
  publication-title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
– year: May 2017
  ident: ref35
  article-title: L1-norm Error Function Robustness and Outlier Regularization.
  publication-title: arXiv
– volume: 34
  start-page: 1936-1962
  year: Nov. 2020
  ident: ref24
  article-title: InceptionTime: Finding AlexNet for time series classification.
  publication-title: Data Min. Knowl. Discov.
  doi: 10.1007/s10618-020-00710-y
– volume: 2020
  start-page: 1-10
  year: 2020
  ident: ref27
  article-title: A Novel Multichannel Dilated Convolution Neural Network for Human Activity Recognition.
  publication-title: Math. Probl. Eng.
  doi: 10.1155/2020/5426532
– year: 2014
  ident: ref36
  article-title: Network in network.
  publication-title: 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings.
– volume: 13
  start-page: 155014771668368
  year: 2017
  ident: ref18
  article-title: Recognizing human activities from smartphone sensors using hierarchical continuous hidden Markov models.
  publication-title: Int. J. Distrib. Sens. Networks.
  doi: 10.1177/1550147716683687
– start-page: 1399-1406
  year: 2015
  ident: ref6
  article-title: Multivariate time series classification using dynamic time warping template selection for human activity recognition.
  publication-title: Proceedings - 2015 IEEE Symposium Series on Computational Intelligence, SSCI 2015.
– start-page: 914-919
  year: 2013
  ident: ref14
  article-title: Activity recognition using smartphone sensors.
  publication-title: 2013 IEEE 10th Consumer Communications and Networking Conference, CCNC 2013.
– year: 2018
  ident: ref21
  article-title: Human activity recognition using temporal convolutional network.
  publication-title: ACM Int. Conf. Proceeding Ser.
– volume: 24
  start-page: 292-299
  year: Jan. 2020
  ident: ref19
  article-title: TSE-CNN: A Two-Stage End-to-End CNN for Human Activity Recognition.
  publication-title: IEEE J. Biomed. Heal. Informatics.
  doi: 10.1109/JBHI.2019.2909688
– volume: 62
  start-page: 915-922
  year: 2018
  ident: ref31
  article-title: Real-time human activity recognition from accelerometer data using Convolutional Neural Networks.
  publication-title: Appl. Soft Comput. J.
  doi: 10.1016/j.asoc.2017.09.027
– start-page: 5041-5044
  year: 2009
  ident: ref15
  article-title: Activity recognition from acceleration data based on discrete cosine transform and SVM.
  publication-title: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics.
– start-page: 121-125
  year: 2019
  ident: ref22
  article-title: Temporal approaches for human activity recognition using inertial sensors.
  publication-title: Proc. - 2019 Lat. Am. Robot. Symp. 2019 Brazilian Symp. Robot. 2019 Work. Robot. Educ. LARS/SBR/WRE 2019.
– start-page: 611-616
  year: 2019
  ident: ref3
  article-title: Detection of Falls with Smartphone Using Machine Learning Technique.
  publication-title: Proceedings - 2019 8th International Congress on Advanced Applied Informatics, IIAI-AAI 2019.
– volume: 07-12-June
  start-page: 1-9
  year: 2015
  ident: ref23
  article-title: Going deeper with convolutions.
  publication-title: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
– start-page: 41-46
  year: 2018
  ident: ref32
  article-title: Human activity recognition for healthcare using smartphones.
  publication-title: ACM Int. Conf. Proceeding Ser.
– start-page: 219-224
  year: 2018
  ident: ref10
  article-title: Human activity recognition with smartphone inertial sensors using bidir-LSTM networks.
  publication-title: Proc. - 2018 3rd Int. Conf. Mech. Control Comput. Eng. ICMCCE 2018.
– year: 2019
  ident: ref20
  article-title: Human Activity Recognition using LSTM-RNN Deep Neural Network Architecture.
  publication-title: 2019 IEEE 2nd Wireless Africa Conference, WAC 2019 - Proceedings.
– volume: 9
  start-page: 68985-69001
  year: 2021
  ident: ref28
  article-title: ISPLInception: An Inception-ResNet Deep Learning Architecture for Human Activity Recognition.
  publication-title: IEEE Access.
  doi: 10.1109/ACCESS.2021.3078184
– start-page: 3036-3041
  year: 2016
  ident: ref29
  article-title: Hidden Markov Model Ensemble for Activity Recognition Using Tri-Axis Accelerometer.
  publication-title: Proceedings - 2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015.
– volume: 127
  start-page: 22-26
  year: 2015
  ident: ref7
  article-title: Human Activity Recognition through Smartphone’s Tri-Axial Accelerometer using Time Domain Wave Analysis and Machine Learning.
  publication-title: Artic. Int. J. Comput. Appl.
  doi: 10.5120/ijca2015906733
– start-page: 560-565
  year: 2019
  ident: ref33
  article-title: Human activity recognition using multichannel convolutional neural network.
  publication-title: 2019 5th International Conference on Advances in Electrical Engineering, ICAEE 2019.
– volume: 2019-October
  start-page: 175-180
  year: Oct. 2019
  ident: ref11
  article-title: Stacked Lstm Network for Human Activity Recognition Using Smartphone Data.
Snippet Background: In recent years, human activity recognition (HAR) has been an active research topic due to its widespread application in various fields such as...
StartPage 1261
SubjectTerms Classification
Deep learning
dilated convolution
Engineering
human activity recognition
Long short-term memory
Neural networks
one-dimensional inertial sensor
smartphone
Smartphones
Support vector machines
temporal convolutional network
Temporal variations
Time series
Title MSTCN: A multiscale temporal convolutional network for user independent human activity recognition [version 2; peer review: 2 approved, 1 approved with reservations]
URI http://dx.doi.org/10.12688/f1000research.73175.2
https://www.proquest.com/docview/2793370881
https://www.proquest.com/docview/3168755227
https://pubmed.ncbi.nlm.nih.gov/PMC9989544
https://doaj.org/article/77037cb2759b48939d38fa6f30b12cac
Volume 10