MULTIPLE DEVICE SENSOR INPUT BASED AVATAR

Bibliographic Details
Main Authors: JAKUBZAK, Kenneth Mitchell; LEE, Austin S; KWOK, Alton; LAMB, Mathew J
Format: Patent
Language: English, French, German
Published: 27.09.2023

Summary: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of a face of the first user texture mapped onto a three-dimensional body simulation that follows actual physical movement of the first user. The three-dimensional body simulation of the first user is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device both associated with the first user. The three-dimensional representation of the face of the first user is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
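The pipeline described in the summary can be sketched as follows. This is an illustrative outline only, not the patented implementation: all class and function names (`ImagingDeviceFrame`, `HmdSensorSample`, `build_avatar`, `render_stream`) are hypothetical, and the data fusion is reduced to a trivial stand-in.

```python
from dataclasses import dataclass

# Hypothetical data containers; names and fields are illustrative,
# not taken from the patent.
@dataclass
class ImagingDeviceFrame:
    face_pixels: list      # image data of the first user's face
    body_keypoints: dict   # e.g. {"torso": (x, y, z), ...} from the imaging device

@dataclass
class HmdSensorSample:
    head_pose: tuple       # head position reported by the HMD's image sensors

@dataclass
class Avatar:
    face_mesh: list        # 3D face representation, texture-mapped onto the body
    body_pose: dict        # 3D body simulation following actual physical movement

def build_avatar(frame: ImagingDeviceFrame, hmd: HmdSensorSample) -> Avatar:
    # Body simulation fuses the external imaging device's keypoints with
    # the HMD's sensor data, as the abstract describes.
    body_pose = dict(frame.body_keypoints)
    body_pose["head"] = hmd.head_pose
    # Face representation is generated from the imaging device's image data
    # (here reduced to tagging each pixel as a placeholder mesh element).
    face_mesh = [("vertex", px) for px in frame.face_pixels]
    return Avatar(face_mesh=face_mesh, body_pose=body_pose)

def render_stream(avatar: Avatar) -> bytes:
    # Stand-in for encoding the composited avatar into a video stream
    # sent over a communication network to the second user's display device.
    return repr((avatar.body_pose, len(avatar.face_mesh))).encode()
```

In a real system the fusion step would involve pose estimation and mesh reconstruction; the sketch only shows where each sensor's data enters the avatar.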
Bibliography: Application Number: EP20210798216