Automatic human posture estimation for sport activity recognition with robust body parts detection and entropy markov model

Bibliographic Details
Published in: Multimedia Tools and Applications: An International Journal, Vol. 80, No. 14, pp. 21465–21498
Main Authors: Nadeem, Amir; Jalal, Ahmad; Kim, Kibum
Format: Journal Article
Language: English
Published: New York: Springer US, 01.06.2021 (Springer Nature B.V.)
Online Access: Get full text

Abstract: Automated human posture estimation (A-HPE) systems need precise methods for detecting body parts and selecting cues from marker-less sensors in order to recognize complex activity motions effectively. Recognizing human activities with vision sensors is challenging because of variations in illumination and the complex movements that occur during the monitoring of sports and fitness exercises. In this paper, we propose a novel A-HPE method that identifies human behaviours by combining saliency silhouette detection, a robust body parts model, and multidimensional cues from full-body silhouettes, followed by an entropy Markov model. Initially, images are pre-processed and noise is removed to obtain a robust silhouette. A body parts model is then used to extract twelve key body parts, which are further optimized to support the generation of multidimensional cues. These cues (energy, optical flow, and distinctive values) are fed into quadratic discriminant analysis, which discriminates the cues that help in the recognition of actions. Finally, the optimized patterns are processed by a maximum entropy Markov model, a recognizer engine based on transition and emission probability values, for activity recognition. For evaluation, we used a leave-one-out cross-validation scheme; the results outperformed well-known state-of-the-art statistical methods, achieving better body parts detection and higher recognition accuracy over four benchmark datasets. The proposed method will be useful for man-machine interactions such as 3D interactive games, virtual reality, service robots, e-health fitness, and security surveillance.

Graphical Abstract: Design model of automatic posture estimation and action recognition.
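The recognizer stage described in the abstract scores activity sequences from transition and emission probability values. The sketch below is a minimal illustration of that idea, not the authors' implementation: the state names, probabilities, and function are invented for the example. Given log transition and log emission tables over activity states, it recovers the most likely state sequence with Viterbi decoding.

```python
# Minimal sketch of a transition/emission sequence recognizer of the kind the
# abstract describes (maximum entropy Markov model stage). All names and
# probability values below are illustrative assumptions, not the paper's.
import numpy as np

def viterbi_decode(emission_log, transition_log, prior_log):
    """Most likely state sequence given (T, S) emission log-probs,
    (S, S) transition log-probs, and (S,) initial log-probs."""
    T, S = emission_log.shape
    score = prior_log + emission_log[0]           # best log-score per state
    back = np.zeros((T, S), dtype=int)            # backpointers
    for t in range(1, T):
        cand = score[:, None] + transition_log    # cand[i, j]: prev i -> next j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emission_log[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

states = ["run", "jump"]                          # toy activity states
prior = np.log([0.6, 0.4])
trans = np.log([[0.7, 0.3],                       # rows: previous state
                [0.4, 0.6]])
emis = np.log([[0.9, 0.1],                        # per-frame cue likelihoods
               [0.2, 0.8],
               [0.3, 0.7]])
print([states[s] for s in viterbi_decode(emis, trans, prior)])
```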
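Upstream of the sequence model, the abstract names quadratic discriminant analysis over the cue vectors and evaluation by leave-one-out cross validation. The following is a hedged sketch of those two steps using scikit-learn; the cue features and labels are synthetic placeholders, not the paper's data.

```python
# Sketch of QDA cue discrimination evaluated with leave-one-out cross
# validation, as named in the abstract. Data are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Toy per-sequence cue vectors: [energy, optical flow, distinctive value]
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 3)),
               rng.normal(2.0, 1.0, size=(30, 3))])
y = np.repeat([0, 1], 30)                  # two toy activity classes

qda = QuadraticDiscriminantAnalysis()
acc = cross_val_score(qda, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```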
Authors and Affiliations
– Nadeem, Amir (Air University)
– Jalal, Ahmad (Air University)
– Kim, Kibum (Department of Human-Computer Interaction, Hanyang University; kibum@hanyang.ac.kr)
Copyright: The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature 2021.
DOI: 10.1007/s11042-021-10687-5
Discipline: Engineering; Computer Science
EISSN: 1573-7721
ISSN: 1380-7501
Peer Reviewed: Yes
Keywords: Entropy Markov model; Multidimensional cues; Body parts detection; Sports activity recognition; Posture estimation
Subject Terms: Activity recognition; Body parts; Computer Communication Networks; Computer Science; Data Structures and Information Theory; Discriminant analysis; Emission analysis; Entropy; Fitness; Markov chains; Maximum entropy; Multimedia Information Systems; Optical flow (image analysis); Robustness; Sensors; Service robots; Special Purpose and Application-Based Systems; Statistical analysis; Statistical methods; Virtual reality
URI: https://link.springer.com/article/10.1007/s11042-021-10687-5
URI: https://www.proquest.com/docview/2531845692