MULTIPLE DEVICE SENSOR INPUT BASED AVATAR
Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device, both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
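The abstract describes a pipeline: fuse external-camera image data with head-mounted display sensor data into a body simulation, derive a face representation from the camera data, texture map the face onto the body, and send the composed frames to a second user's display. A minimal Python sketch of that flow is below; all class and function names, and the placeholder data structures, are illustrative assumptions, not taken from the patent itself.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the two sensor inputs named in the abstract.
@dataclass
class ImagingDeviceFrame:
    rgb_pixels: list  # image data from the imaging device viewing the first user

@dataclass
class HmdSensorSample:
    head_pose: tuple  # pose data from the first user's head-mounted display

def build_body_simulation(frame: ImagingDeviceFrame,
                          hmd_sample: HmdSensorSample) -> dict:
    """Fuse camera image data with HMD sensor data into a body simulation.
    Here the 'simulation' is just a dict of placeholder joint poses driven
    by the HMD head pose; a real system would run full body tracking."""
    return {"head": hmd_sample.head_pose, "torso": (0.0, -0.5, 0.0)}

def build_face_representation(frame: ImagingDeviceFrame) -> dict:
    """Derive a 3D face representation from the imaging-device data only,
    as the abstract specifies. Vertices/texture are placeholders."""
    return {"vertex_count": len(frame.rgb_pixels), "texture": frame.rgb_pixels}

def compose_avatar_frame(frame: ImagingDeviceFrame,
                         hmd_sample: HmdSensorSample) -> dict:
    """Texture map the face representation onto the body simulation."""
    return {
        "body": build_body_simulation(frame, hmd_sample),
        "face": build_face_representation(frame),
    }

def send_video_stream(avatar_frames: list, display_device: list) -> None:
    """Stand-in for sending the video stream over a communication network
    to the display device associated with the second user."""
    display_device.extend(avatar_frames)
```

A short usage example under the same assumptions: one camera frame plus one HMD sample yield one avatar frame, which is then delivered to the second user's (here, a plain list standing in for a) display device.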
| Field | Value |
|---|---|
| Format | Patent |
| Language | English, French, German |
| Published | 27.09.2023 |
| Application Number | EP20210798216 |