MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting

Bibliographic Details
Published in ACM Transactions on Graphics, Vol. 43, No. 6, pp. 1-21
Main Authors Tessler, Chen; Guo, Yunrong; Nabati, Ofir; Chechik, Gal; Peng, Xue Bin
Format Journal Article
Language English
Published New York, NY, USA: ACM, 19.12.2024
Subjects Animation; Computer graphics; Computing methodologies; Physical simulation; Procedural animation
ISSN 0730-0301
EISSN 1557-7368
DOI 10.1145/3687951

Abstract Crafting a single, versatile physics-based controller that can breathe life into interactive characters across a wide spectrum of scenarios represents an exciting frontier in character animation. An ideal controller should support diverse control modalities, such as sparse target keyframes, text instructions, and scene information. While previous works have proposed physically simulated, scene-aware control models, these systems have predominantly focused on developing controllers that each specializes in a narrow set of tasks and control modalities. This work presents MaskedMimic, a novel approach that formulates physics-based character control as a general motion inpainting problem. Our key insight is to train a single unified model to synthesize motions from partial (masked) motion descriptions, such as masked keyframes, objects, text descriptions, or any combination thereof. This is achieved by leveraging motion tracking data and designing a scalable training method that can effectively utilize diverse motion descriptions to produce coherent animations. Through this process, our approach learns a physics-based controller that provides an intuitive control interface without requiring tedious reward engineering for all behaviors of interest. The resulting controller supports a wide range of control modalities and enables seamless transitions between disparate tasks. By unifying character control through motion inpainting, MaskedMimic creates versatile virtual characters. These characters can dynamically adapt to complex scenes and compose diverse motions on demand, enabling more interactive and immersive experiences.
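The central idea in the abstract, training a single controller on randomly masked motion descriptions, can be illustrated with a minimal sketch. The following Python/NumPy function is hypothetical and not the authors' implementation: the name mask_motion_description, the tensor shapes, and the keep probabilities are assumptions chosen only to show how a full description (per-joint target keyframes plus optional text and scene-object conditioning) might be reduced to a partial one during training.

import numpy as np

def mask_motion_description(joint_targets, text_embedding=None, object_features=None,
                            joint_keep_prob=0.3, modality_keep_prob=0.5, rng=None):
    # joint_targets: array of shape (T, J, D) -- T future keyframes, J joints,
    # D features per joint (e.g. position and rotation). All names and shapes
    # here are illustrative assumptions, not the paper's actual interface.
    rng = np.random.default_rng() if rng is None else rng
    T, J, _ = joint_targets.shape

    # Per-frame, per-joint Bernoulli mask: True means this target stays visible.
    visible = rng.random((T, J)) < joint_keep_prob
    masked_targets = np.where(visible[..., None], joint_targets, 0.0)

    # Whole modalities (text command, scene-object features) are kept or dropped
    # as a block, so the controller also learns to act without them.
    text = text_embedding if text_embedding is not None and rng.random() < modality_keep_prob else None
    objects = object_features if object_features is not None and rng.random() < modality_keep_prob else None
    return masked_targets, visible, text, objects

Sampling many such partial descriptions from full motion-capture clips is what would let one model later accept sparse keyframes, text instructions, scene information, or any combination of these as its control signal.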
ArticleNumber 209
Author Tessler, Chen
Peng, Xue Bin
Guo, Yunrong
Nabati, Ofir
Chechik, Gal
Author_xml – sequence: 1
  givenname: Chen
  orcidid: 0000-0001-6447-9864
  surname: Tessler
  fullname: Tessler, Chen
  email: ctessler@nvidia.com
  organization: NVIDIA Research, Tel Aviv, Israel
– sequence: 2
  givenname: Yunrong
  orcidid: 0000-0001-7468-6162
  surname: Guo
  fullname: Guo, Yunrong
  email: kellyg@nvidia.com
  organization: NVIDIA, Santa Clara, United States of America
– sequence: 3
  givenname: Ofir
  orcidid: 0009-0008-1435-3399
  surname: Nabati
  fullname: Nabati, Ofir
  email: ofirnabati@gmail.com
  organization: NVIDIA Research, Tel Aviv, Israel
– sequence: 4
  givenname: Gal
  orcidid: 0000-0001-9164-5303
  surname: Chechik
  fullname: Chechik, Gal
  email: gchechik@nvidia.com
  organization: NVIDIA Research, Tel Aviv, Israel
– sequence: 5
  givenname: Xue Bin
  orcidid: 0000-0002-3677-5655
  surname: Peng
  fullname: Peng, Xue Bin
  email: japeng@nvidia.com
  organization: NVIDIA, Vancouver, Canada
ContentType Journal Article
DOI 10.1145/3687951
Discipline Engineering
EISSN 1557-7368
EndPage 21
ISSN 0730-0301
IsDoiOpenAccess true
IsOpenAccess true
IsPeerReviewed true
IsScholarly true
Issue 6
Keywords motion tracking
animated character control
motion capture data
reinforcement learning
Language English
License This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
ORCID 0000-0002-3677-5655
0000-0001-9164-5303
0000-0001-6447-9864
0009-0008-1435-3399
0000-0001-7468-6162
OpenAccessLink https://dl.acm.org/doi/10.1145/3687951
PageCount 21
PublicationDate 2024-12-19
PublicationPlace New York, NY, USA
PublicationTitle ACM transactions on graphics
PublicationTitleAbbrev ACM TOG
PublicationYear 2024
Publisher ACM
StartPage 1
SubjectTerms Animation
Computer graphics
Computing methodologies
Physical simulation
Procedural animation
SubjectTermsDisplay Computing methodologies -- Computer graphics -- Animation -- Physical simulation
Computing methodologies -- Computer graphics -- Animation -- Procedural animation
Title MaskedMimic: Unified Physics-Based Character Control Through Masked Motion Inpainting
URI https://dl.acm.org/doi/10.1145/3687951
Volume 43