A Data-driven Segmentation for the Shoulder Complex

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 29, No. 2, pp. 537-544
Main Authors: Hong, Q. Youn; Park, Sang Il; Hodgins, Jessica K.
Format: Journal Article
Language: English
Published: Oxford, UK: Blackwell Publishing Ltd, 01.05.2010
Summary: The human shoulder complex is perhaps the most complicated joint in the body, consisting of three bones together with muscles, tendons, and ligaments. Despite this anatomical complexity, computer graphics models for motion capture most often represent this joint as a simple ball and socket. In this paper, we present a method for determining a shoulder skeletal model that, when combined with standard skinning algorithms, generates a more visually pleasing animation that more closely approximates the actual skin deformations of the human body. We take a data-driven approach, collecting ground truth skin deformation data with an optical motion capture system and a large number of markers (200 markers on the shoulder complex alone). We cluster these markers during movement sequences and find that adding one extra joint around the shoulder improves the resulting animation both qualitatively and quantitatively, yielding a marker set of approximately 70 markers for the complete skeleton. We demonstrate the effectiveness of our skeletal model by comparing it with ground truth data as well as with recorded video, and we show its practicality by integrating it with the conventional rendering/animation pipeline.
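
The marker-clustering step summarized above can be illustrated with a short sketch. The fragment below is a minimal, hypothetical Python (NumPy/SciPy) illustration, not the authors' actual pipeline: it assumes marker trajectories are given as an array of per-frame 3D positions, scores marker pairs by how much their mutual distance varies over time (markers on the same rigid segment keep near-constant pairwise distances), and groups them with hierarchical clustering. The function name segment_markers and the 5 mm threshold are illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist, squareform

def segment_markers(trajectories, threshold_mm=5.0):
    # trajectories: (frames, markers, 3) marker positions in millimetres.
    # Pairwise marker distances for every frame: (frames, markers, markers).
    dists = np.stack([cdist(frame, frame) for frame in trajectories])
    # Rigidity score: how much each pairwise distance varies over time.
    # Markers on the same rigid segment should score near zero.
    rigidity = dists.std(axis=0)
    # Average-linkage hierarchical clustering on the condensed matrix.
    Z = linkage(squareform(rigidity, checks=False), method="average")
    # Cut the dendrogram where distance variation exceeds the threshold.
    return fcluster(Z, t=threshold_mm, criterion="distance")

# Toy usage: 100 frames of 200 markers with random positions; real motion
# capture data would instead collapse into a few rigid segments.
labels = segment_markers(np.random.default_rng(0).normal(size=(100, 200, 3)))
print(f"{labels.max()} segments found")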
Bibliography: ArticleID: CGF1623
ark:/67375/WNG-PXP9LK06-D
istex:990A7A9C586433A3D26A34F140D023019173B873
ISSN: 0167-7055
EISSN: 1467-8659
DOI: 10.1111/j.1467-8659.2009.01623.x