HUMBI: A Large Multiview Dataset of Human Body Expressions and Benchmark Challenge
Format | Journal Article
---|---
Language | English
Published | 30.09.2021
Summary: This paper presents a new large multiview dataset called HUMBI for human
body expressions with natural clothing. The goal of HUMBI is to facilitate
modeling view-specific appearance and geometry of five primary body signals:
gaze, face, hand, body, and garment, from assorted people. 107 synchronized HD
cameras are used to capture 772 distinctive subjects across gender, ethnicity,
age, and style. With the multiview image streams, we reconstruct high-fidelity
body expressions using 3D mesh models, which enables representing view-specific
appearance. We demonstrate that HUMBI is highly effective in learning and
reconstructing a complete human model and is complementary to existing datasets
of human body expressions with limited views and subjects, such as MPII-Gaze,
Multi-PIE, Human3.6M, and the Panoptic Studio dataset. Based on HUMBI, we
formulate a new benchmark challenge, a pose-guided appearance rendering task,
that aims to substantially extend photorealism in modeling diverse human
expressions in 3D, a key enabling factor of authentic social telepresence.
HUMBI is publicly available at http://humbi-data.net
DOI: 10.48550/arxiv.2110.00119