Boosting Monocular 3D Human Pose Estimation With Part Aware Attention

Bibliographic Details
Published in IEEE Transactions on Image Processing, Vol. 31, pp. 4278-4291
Main Authors Xue, Youze; Chen, Jiansheng; Gu, Xiangming; Ma, Huimin; Ma, Hongbing
Format Journal Article
Language English
Published United States IEEE 01.01.2022
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Online Access Get full text
ISSN 1057-7149
EISSN 1941-0042
DOI 10.1109/TIP.2022.3182269

Abstract Monocular 3D human pose estimation is challenging due to depth ambiguity. Convolution-based and Graph-Convolution-based methods have been developed to extract 3D information from temporal cues in motion videos. Typically, in the lifting-based methods, most recent works adopt the transformer to model the temporal relationship of 2D keypoint sequences. These previous works usually consider all the joints of a skeleton as a whole and then calculate the temporal attention based on the overall characteristics of the skeleton. Nevertheless, the human skeleton exhibits obvious part-wise inconsistency of motion patterns. It is therefore more appropriate to consider each part's temporal behaviors separately. To deal with such part-wise motion inconsistency, we propose the Part Aware Temporal Attention module to extract the temporal dependency of each part separately. Moreover, the conventional attention mechanism in 3D pose estimation usually calculates attention within a short time interval. This indicates that only the correlation within the temporal context is considered. Whereas, we find that the part-wise structure of the human skeleton is repeating across different periods, actions, and even subjects. Therefore, the part-wise correlation at a distance can be utilized to further boost 3D pose estimation. We thus propose the Part Aware Dictionary Attention module to calculate the attention for the part-wise features of input in a dictionary, which contains multiple 3D skeletons sampled from the training set. Extensive experimental results show that our proposed part aware attention mechanism helps a transformer-based model to achieve state-of-the-art 3D pose estimation performance on two widely used public datasets. The codes and the trained models are released at https://github.com/thuxyz19/3D-HPE-PAA .
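The two modules described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that): it uses raw part features directly as queries, keys, and values with no learned projections or multi-head structure, and the five-part joint grouping for the 17-joint Human3.6M layout is hypothetical. It only shows the core idea: temporal self-attention computed separately per body part, and cross-attention from input part features to a dictionary of skeletons sampled from a training set.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_aware_temporal_attention(seq, parts):
    """Self-attention over time, computed independently for each body part.

    seq   : (T, J, C) array of T frames, J joints, C channels per joint.
    parts : list of joint-index lists partitioning the J joints.
    """
    T, J, C = seq.shape
    out = np.zeros_like(seq)
    for idx in parts:
        x = seq[:, idx, :].reshape(T, -1)        # (T, len(idx)*C) per-frame part feature
        scores = x @ x.T / np.sqrt(x.shape[-1])  # (T, T) temporal affinities for this part
        out[:, idx, :] = (softmax(scores) @ x).reshape(T, len(idx), C)
    return out

def part_aware_dictionary_attention(seq, dictionary, parts):
    """Cross-attention from input part features to a dictionary of N skeletons.

    dictionary : (N, J, C) features of skeletons sampled from a training set.
    """
    T, J, C = seq.shape
    N = dictionary.shape[0]
    out = np.zeros_like(seq)
    for idx in parts:
        q = seq[:, idx, :].reshape(T, -1)          # queries from the input sequence
        kv = dictionary[:, idx, :].reshape(N, -1)  # keys/values from the dictionary
        scores = q @ kv.T / np.sqrt(q.shape[-1])   # (T, N) part-wise similarity at a distance
        out[:, idx, :] = (softmax(scores) @ kv).reshape(T, len(idx), C)
    return out

# Toy example: 9 frames, 17 joints, 2D keypoints; the part grouping is hypothetical.
rng = np.random.default_rng(0)
seq = rng.standard_normal((9, 17, 2))
parts = [[0, 7, 8, 9, 10], [11, 12, 13], [14, 15, 16], [1, 2, 3], [4, 5, 6]]
refined = part_aware_temporal_attention(seq, parts)
dictionary = rng.standard_normal((32, 17, 2))  # 32 sampled skeletons
matched = part_aware_dictionary_attention(seq, dictionary, parts)
```

The per-part loop is what distinguishes this from whole-skeleton attention: each part gets its own affinity matrix, so a fast-moving arm and a static torso no longer share one set of attention weights.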
Author Details
– sequence 1: Xue, Youze; ORCID 0000-0002-7054-5204; email xueyz19@mails.tsinghua.edu.cn; Department of Electronic Engineering, Tsinghua University, Beijing, China
– sequence 2: Chen, Jiansheng; ORCID 0000-0002-2040-7938; email jschen@ustb.edu.cn; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
– sequence 3: Gu, Xiangming; ORCID 0000-0003-0637-8664; email xiangming@comp.nus.edu.sg; School of Computing, National University of Singapore, Singapore
– sequence 4: Ma, Huimin; ORCID 0000-0001-5383-5667; email mhmpub@ustb.edu.cn; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
– sequence 5: Ma, Hongbing; email hbma@mail.tsinghua.edu.cn; Department of Electronic Engineering, Tsinghua University, Beijing, China
BackLink https://www.ncbi.nlm.nih.gov/pubmed/35709111 (view this record in MEDLINE/PubMed)
CODEN IIPRE4
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Discipline Applied Sciences; Engineering
EISSN 1941-0042
EndPage 4291
ExternalDocumentID PMID 35709111; Crossref 10_1109_TIP_2022_3182269; IEEE 9798770
Genre orig-research; Journal Article
GrantInformation Beijing Science and Technology Planning Project (grant Z191100007419001; funder ID 10.13039/501100012401); National Natural Science Foundation of China (grants U20B2062 and 61673234; funder ID 10.13039/501100001809)
IsPeerReviewed true
IsScholarly true
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
PMID 35709111
PQID 2681953836
PQPubID 85429
PageCount 14
PublicationDate 2022-01-01
PublicationPlace United States
PublicationTitle IEEE transactions on image processing
PublicationTitleAbbrev TIP
PublicationTitleAlternate IEEE Trans Image Process
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 4278
SubjectTerms 3D human pose estimation
Convolution
Correlation
Dictionaries
dictionary attention
Human motion
Modules
part aware attention
Pose estimation
Skeleton
temporal attention
Three-dimensional displays
Transformers
Title Boosting Monocular 3D Human Pose Estimation With Part Aware Attention
URI https://ieeexplore.ieee.org/document/9798770
https://www.ncbi.nlm.nih.gov/pubmed/35709111
https://www.proquest.com/docview/2681953836
https://www.proquest.com/docview/2677572597
Volume 31