GRAB: A Dataset of Whole-Body Human Grasping of Objects
Published in | arXiv.org |
---|---|
Main Authors | Taheri, Omid; Ghorbani, Nima; Black, Michael J; Tzionas, Dimitrios |
Format | Paper, Journal Article |
Language | English |
Published | Ithaca: Cornell University Library, arXiv.org, 25.08.2020 |
Subjects | Computer Science - Computer Vision and Pattern Recognition; Computer simulation; Datasets; Three dimensional bodies; Three dimensional motion |
Online Access | https://arxiv.org/abs/2008.11200 |
ISSN | 2331-8422 |
DOI | 10.48550/arxiv.2008.11200 |
Abstract | Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". Thus, we collect a new dataset, called GRAB (GRasping Actions with Bodies), of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. Given MoCap markers, we fit the full 3D body shape and pose, including the articulated face and hands, as well as the 3D object pose. This gives detailed 3D meshes over time, from which we compute contact between the body and object. This is a unique dataset that goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task. We illustrate the practical value of GRAB with an example application: we train GrabNet, a conditional generative network, to predict 3D hand grasps for unseen 3D object shapes. The dataset and code are available for research purposes at https://grab.is.tue.mpg.de. |
Author | Taheri, Omid; Ghorbani, Nima; Black, Michael J; Tzionas, Dimitrios |
BackLink | https://doi.org/10.1007/978-3-030-58548-8_34 (published paper; access to full text may be restricted) https://doi.org/10.48550/arxiv.2008.11200 (paper in arXiv) |
Copyright | 2020. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
Discipline | Physics |
EISSN | 2331-8422 |
Genre | Working Paper/Pre-Print |
IsDoiOpenAccess | true |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | false |
Language | English |
OpenAccessLink | https://arxiv.org/abs/2008.11200 |
PublicationCentury | 2000 |
PublicationDate | 2020-08-25 |
PublicationDateYYYYMMDD | 2020-08-25 |
PublicationDecade | 2020 |
PublicationPlace | Ithaca |
PublicationTitle | arXiv.org |
PublicationYear | 2020 |
Publisher | Cornell University Library, arXiv.org |
SecondaryResourceType | preprint |
SubjectTerms | Computer Science - Computer Vision and Pattern Recognition Computer simulation Datasets Three dimensional bodies Three dimensional motion |
Title | GRAB: A Dataset of Whole-Body Human Grasping of Objects |
URI | https://www.proquest.com/docview/2437281664 https://arxiv.org/abs/2008.11200 |
linkProvider | Cornell University |