Geometry-Guided Dense Perspective Network for Speech-Driven Facial Animation
Published in | IEEE Transactions on Visualization and Computer Graphics, Vol. 28, no. 12, pp. 4873-4886 |
---|---|
Main Authors | Liu, Jingying; Hui, Binyuan; Li, Kun; Liu, Yunke; Lai, Yu-Kun; Zhang, Yuxiang; Liu, Yebin; Yang, Jingyu |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.12.2022 |
Subjects | 3D facial animation; geometry-guided; speaker-independent; speech-driven |
ISSN | 1077-2626 (print); 1941-0506 (electronic) |
DOI | 10.1109/TVCG.2021.3107669 |
Abstract | Realistic speech-driven 3D facial animation is a challenging problem due to the complex relationship between speech and face. In this paper, we propose a deep architecture, called Geometry-guided Dense Perspective Network (GDPnet), to achieve speaker-independent realistic 3D facial animation. The encoder is designed with dense connections to strengthen feature propagation and encourage the re-use of audio features, and the decoder is integrated with an attention mechanism to adaptively recalibrate point-wise feature responses by explicitly modeling interdependencies between different neuron units. We also introduce a non-linear face reconstruction representation as a guidance of the latent space to obtain more accurate deformation, which helps solve geometry-related deformation and generalizes well across subjects. Huber and HSIC (Hilbert-Schmidt Independence Criterion) constraints are adopted to promote the robustness of our model and to better exploit non-linear and high-order correlations. Experimental results on a public dataset and a real scanned dataset validate the superiority of the proposed GDPnet compared with state-of-the-art models. The code is available for research purposes at http://cic.tju.edu.cn/faculty/likun/projects/GDPnet. |
---|---|
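The two training constraints named in the abstract, Huber loss and the Hilbert-Schmidt Independence Criterion, can be illustrated outside any deep-learning framework. A minimal NumPy sketch follows; the function names, the Gaussian-kernel choice, and the bandwidth `sigma` are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust to outliers)."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)).mean()

def rbf_gram(X, sigma=1.0):
    """Gaussian-kernel Gram matrix over the rows of X (shape: n x d)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2,
    where H = I - (1/n) 11^T centers the Gram matrices."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

A large HSIC value indicates strong (possibly non-linear) statistical dependence between the two sets of samples; dependent pairs score higher than independent ones, which is how such a term can be used to shape a latent space.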
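The decoder's attention mechanism, which "adaptively recalibrates point-wise feature responses by explicitly modeling interdependencies between different neuron units," follows the squeeze-and-excitation pattern. A hedged NumPy sketch of that pattern is below; the shapes, bottleneck width, and weight names are illustrative assumptions rather than the paper's exact design:

```python
import numpy as np

def recalibrate(features, w1, w2):
    """SE-style gating over a (points, channels) feature map:
    squeeze (global average over points), excite (two-layer bottleneck),
    then rescale every channel by a sigmoid gate in (0, 1)."""
    z = features.mean(axis=0)                    # squeeze: (C,)
    h = np.maximum(z @ w1, 0.0)                  # bottleneck + ReLU: (C/r,)
    gate = 1.0 / (1.0 + np.exp(-(h @ w2)))       # sigmoid gates: (C,)
    return features * gate                       # point-wise recalibration
```

Because each gate lies in (0, 1), the output never exceeds the input in magnitude; the network learns which channels to suppress given the global context.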
Authors | Jingying Liu (College of Intelligence and Computing, Tianjin University, Tianjin, China); Binyuan Hui (College of Intelligence and Computing, Tianjin University, Tianjin, China); Kun Li (College of Intelligence and Computing, Tianjin University, Tianjin, China; ORCID 0000-0003-2326-0166); Yunke Liu (College of Intelligence and Computing, Tianjin University, Tianjin, China); Yu-Kun Lai (School of Computer Science and Informatics, Cardiff University, Cardiff, U.K.; ORCID 0000-0002-2094-5680); Yuxiang Zhang (Department of Automation, Tsinghua University, Beijing, China); Yebin Liu (Department of Automation, Tsinghua University, Beijing, China; ORCID 0000-0003-3215-0225); Jingyu Yang (School of Electrical and Information Engineering, Tianjin University, Tianjin, China; ORCID 0000-0002-7521-7920)
CODEN | ITVGEA |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
Discipline | Engineering |
EISSN | 1941-0506 |
EndPage | 4886 |
Genre | orig-research |
GrantInformation | National Natural Science Foundation of China, grants 62171317, 62122058, and 61771339 (funder ID 10.13039/501100001809)
ISSN | 1077-2626 1941-0506 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 12 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0003-3215-0225 0000-0002-7521-7920 0000-0003-2326-0166 0000-0002-2094-5680 |
PMID | 34449390 |
PageCount | 14 |
PublicationDate | 2022-12-01
PublicationPlace | New York |
PublicationTitle | IEEE transactions on visualization and computer graphics |
PublicationTitleAbbrev | TVCG |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
StartPage | 4873 |
SubjectTerms | 3D facial animation; Animation; Coders; Correlation; Datasets; Decoding; Deformation; Face recognition; Facial animation; Geometry; geometry-guided; Image reconstruction; Solid modeling; speaker-independent; Speech; Speech-driven; Three-dimensional displays
URI | https://ieeexplore.ieee.org/document/9524465 https://www.proquest.com/docview/2728570897 https://www.proquest.com/docview/2566032676 |
Volume | 28 |