Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

Bibliographic Details
Published in Computer graphics forum, Vol. 36, no. 2, pp. 349-360
Main Authors von Marcard, T., Rosenhahn, B., Black, M. J., Pons‐Moll, G.
Format Journal Article
Language English
Published Oxford: Blackwell Publishing Ltd, 01.05.2017

Abstract We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under‐constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
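
The fitting idea summarized in the abstract (optimize the model's pose over a window of frames so that its virtual sensor orientations and finite-difference accelerations agree with the measured ones) can be made concrete with a small sketch. The snippet below is not the authors' SIP code and does not use the SMPL body model: it fits a planar three-segment chain to synthetic measurements, and all names, weights, and constants in it are hypothetical choices for illustration.

```python
# Illustrative sketch only: a planar three-segment chain fitted to synthetic
# "orientation" and "acceleration" measurements by joint optimization over a
# window of frames. This is NOT the SIP implementation (which fits the SMPL
# body model in 3D to real IMU data); every name, weight, and constant below
# is a hypothetical choice made for this example.
import numpy as np
from scipy.optimize import least_squares

N_FRAMES, N_JOINTS = 30, 3      # short window, 3 joint angles per frame
SEG_LEN = 1.0                   # unit segment length (assumption)
DT = 1.0 / 60.0                 # assumed frame interval

def segment_orientations(theta):
    """Absolute orientation of each segment = cumulative sum of joint angles."""
    return np.cumsum(theta, axis=-1)                      # (frames, joints)

def segment_endpoints(theta):
    """2D position of each segment's distal end (stand-in for sensor locations)."""
    ori = segment_orientations(theta)
    steps = SEG_LEN * np.stack([np.cos(ori), np.sin(ori)], axis=-1)
    return np.cumsum(steps, axis=-2)                      # (frames, joints, 2)

def accelerations(pos):
    """Second finite differences of sensor positions over the window."""
    return (pos[2:] - 2.0 * pos[1:-1] + pos[:-2]) / DT**2

def residuals(x, ori_meas, acc_meas, w_ori=1.0, w_acc=1e-4):
    """Stacked orientation and acceleration residuals for the whole window."""
    theta = x.reshape(N_FRAMES, N_JOINTS)
    r_ori = (segment_orientations(theta) - ori_meas).ravel()
    r_acc = (accelerations(segment_endpoints(theta)) - acc_meas).ravel()
    return np.concatenate([w_ori * r_ori, w_acc * r_acc])

# Synthesize noisy measurements from a smooth ground-truth motion.
t = np.linspace(0.0, 1.0, N_FRAMES)[:, None]
theta_true = 0.5 * np.sin(2.0 * np.pi * t + np.arange(N_JOINTS))
rng = np.random.default_rng(0)
ori_meas = segment_orientations(theta_true) + 0.01 * rng.standard_normal((N_FRAMES, N_JOINTS))
acc_meas = accelerations(segment_endpoints(theta_true))

# Jointly optimize all frames in the window, starting from the zero pose.
sol = least_squares(residuals, np.zeros(N_FRAMES * N_JOINTS), args=(ori_meas, acc_meas))
theta_est = sol.x.reshape(N_FRAMES, N_JOINTS)
print("mean abs joint-angle error (rad):", np.abs(theta_est - theta_true).mean())
```

In this sketch the acceleration term, built from second finite differences, couples neighbouring frames, which is why the whole window is optimized jointly rather than one frame at a time.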
Author von Marcard, T. (Leibniz‐Universität Hannover)
Rosenhahn, B. (Leibniz‐Universität Hannover)
Black, M. J.
Pons‐Moll, G.
ContentType Journal Article
Copyright 2017 The Author(s) Computer Graphics Forum © 2017 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
2017 The Eurographics Association and John Wiley & Sons Ltd.
DOI 10.1111/cgf.13131
DatabaseName CrossRef
Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1467-8659
EndPage 360
Genre article
ISSN 0167-7055
IsPeerReviewed true
IsScholarly true
Issue 2
Language English
License http://onlinelibrary.wiley.com/termsAndConditions
http://doi.wiley.com/10.1002/tdm_license_1
PageCount 12
PublicationDate May 2017
PublicationPlace Oxford
PublicationTitle Computer graphics forum
PublicationYear 2017
Publisher Blackwell Publishing Ltd
StartPage 349
SubjectTerms Categories and Subject Descriptors (according to ACM CCS)
Constraint modelling
Datasets
Head movement
Human motion
I.3.3 [Computer Graphics]: Three‐Dimensional Graphics and Realism—Animation
Inertial sensing devices
Motion capture
Optimization
Pose estimation
Sensors
Video data
Title Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
URI https://onlinelibrary.wiley.com/doi/abs/10.1111%2Fcgf.13131
https://www.proquest.com/docview/1901406546
Volume 36