ExpressGesture: Expressive gesture generation from speech through database matching

Bibliographic Details
Published in: Computer Animation and Virtual Worlds, Vol. 32, no. 3-4 (June-July 2021)
Main Authors: Ferstl, Ylva; Neff, Michael; McDonnell, Rachel
Format: Journal Article
Language: English
Published: Wiley Subscription Services, Inc., Chichester, 01.06.2021
Online Access: https://onlinelibrary.wiley.com/doi/abs/10.1002/cav.2016

Abstract: Co‐speech gestures are a vital ingredient in making virtual agents more human‐like and engaging. Automatically generated gestures based on speech input often lack a realistic and defined gesture form. We present a database‐driven approach that guarantees defined gesture form. We built a large corpus of over 23,000 motion‐captured co‐speech gestures and select individual gestures based on expressive gesture characteristics that can be estimated from speech audio. The expressive parameters are gesture velocity and acceleration, gesture size, arm swivel, and finger extension. Individual, parameter‐matched gestures are then combined into animated sequences. We evaluate our gesture generation system in two perceptual studies. The first study compares our method to the ground‐truth gestures as well as to mismatched gestures; the second compares our method to five current generative machine learning models. Our method outperformed mismatched gesture selection in the first study and showed competitive performance in the second.

Summary: We present a system for automatic gesture generation from speech audio, together with a database of over 23,000 motion‐captured gestures. Using a hybrid approach of machine learning and database sampling, the system guarantees defined gesture form: it selects individual gestures based on expressive characteristics that can be estimated from speech audio and then combines the selected gestures into animated sequences. Our method outperforms mismatched gesture selection and shows performance competitive with current generative machine learning models.
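The abstract describes a two‐stage pipeline: expressive parameters (velocity, acceleration, size, arm swivel, finger extension) are first estimated from the speech audio, and the database gesture whose parameters best match is then selected and combined into a sequence. As a rough illustration of the database‐matching step only, here is a minimal Python sketch; the z‐score normalization, the nearest‐neighbor distance, and every name in it (gesture_db, match_gesture, the faked predictions) are assumptions for exposition, not the authors' published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 23,000-clip corpus: one row of expressive parameters per
# motion-captured gesture; columns = [velocity, acceleration, size,
# arm_swivel, finger_extension]. Real values would come from the mocap data.
gesture_db = rng.random((23000, 5))

# Normalize once so that no single parameter dominates the distance.
mu, sigma = gesture_db.mean(axis=0), gesture_db.std(axis=0)
db_norm = (gesture_db - mu) / sigma

def match_gesture(predicted_params):
    """Index of the database gesture nearest (in normalized parameter
    space) to the parameters estimated from a speech segment."""
    query = (np.asarray(predicted_params) - mu) / sigma
    return int(np.linalg.norm(db_norm - query, axis=1).argmin())

# Hypothetical per-segment outputs of a speech-audio predictor; faked here
# with random numbers purely so the sketch runs end to end.
segment_predictions = rng.random((4, 5))

# The "animated sequence" here is just the list of matched clip indices;
# a real system would also handle transitions between consecutive clips.
sequence = [match_gesture(p) for p in segment_predictions]
print(sequence)
```

In the actual system the five parameters are predicted by models trained on speech, and consecutive matched clips are blended into a continuous animation rather than simply listed.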
Authors:
– Ferstl, Ylva (Trinity College Dublin; ORCID 0000-0001-7259-0378; yferstl@tcd.ie)
– Neff, Michael (University of California)
– McDonnell, Rachel (Trinity College Dublin)
Copyright: © 2021 The Authors. Published by John Wiley & Sons, Ltd. This article is published under the Creative Commons Attribution-NonCommercial 4.0 license (http://creativecommons.org/licenses/by-nc/4.0/).
DOI: 10.1002/cav.2016
Indexed in: Wiley Online Library Open Access; CrossRef; Computer and Information Systems Abstracts (including Academic and Professional editions); Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace
Discipline: Visual Arts
ISSN: 1546-4261
EISSN: 1546-427X
Funding: Science Foundation Ireland, grant 13/RC/2106
Peer reviewed: Yes
Open access: Yes (Creative Commons Attribution-NonCommercial)
Page count: 11
Subject Terms: Acceleration; computer animation; conversational agents; expressive agents; gesture generation; Gesture recognition; Machine learning; Motion capture; motion matching; Parameters; perception; Sequences; Speech