METHOD, APPARATUS, AND TERMINAL FOR PROVIDING SIGN LANGUAGE VIDEO REFLECTING APPEARANCE OF CONVERSATION PARTNER

Bibliographic Details
Main Authors: JUNG, Hye Dong; KIM, Chang Jo; PARK, Han Mu; KO, Sang Ki
Format: Patent
Language: English
Published: 11.02.2021
Summary: Disclosed is a method of providing a sign language video that reflects the appearance of a conversation partner. The method includes recognizing a speech language sentence from speech information, and recognizing an appearance image and a background image from video information. The method further comprises acquiring multiple pieces of word-joint information corresponding to the speech language sentence from a joint-information database, sequentially inputting the word-joint information into a deep learning neural network to generate sentence-joint information, generating a motion model on the basis of the sentence-joint information, and generating a sign language video in which the background image and the appearance image are synthesized with the motion model. The method provides a natural communication environment between a sign language user and a speech language user.
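
The pipeline described in the summary (word-joint lookup, sentence-joint generation, motion synthesis) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration: the data shapes, the tiny JOINT_DB lookup table, and the simple stitching function that stands in for the patent's deep learning neural network, motion model, and compositing steps are hypothetical placeholders, not the claimed implementation.

```python
from typing import Dict, List

Pose = List[float]          # one frame of joint coordinates
WordJoints = List[Pose]     # word-joint information: a short pose sequence per word

# Hypothetical joint-information database keyed by words of the recognized sentence.
JOINT_DB: Dict[str, WordJoints] = {
    "hello": [[0.0, 0.0], [0.2, 0.1]],
    "friend": [[0.2, 0.1], [0.4, 0.3], [0.5, 0.3]],
}

def acquire_word_joints(sentence: str) -> List[WordJoints]:
    """Look up word-joint information for each word of the speech language sentence."""
    return [JOINT_DB[w] for w in sentence.lower().split() if w in JOINT_DB]

def generate_sentence_joints(word_seqs: List[WordJoints]) -> List[Pose]:
    """Placeholder for the deep learning neural network: word-level sequences are
    fed in order and stitched into one sentence-joint sequence, with a midpoint
    pose inserted between words so transitions stay continuous."""
    sentence: List[Pose] = []
    for seq in word_seqs:
        if sentence:  # bridge the previous word's last pose and this word's first pose
            prev, nxt = sentence[-1], seq[0]
            sentence.append([(a + b) / 2 for a, b in zip(prev, nxt)])
        sentence.extend(seq)
    return sentence

def synthesize_video(sentence_joints: List[Pose], appearance: str, background: str) -> List[dict]:
    """Stand-in for driving a motion model with the sentence-joint information and
    compositing the partner's appearance and background onto every output frame."""
    return [{"pose": pose, "appearance": appearance, "background": background}
            for pose in sentence_joints]

if __name__ == "__main__":
    sentence = "hello friend"                           # result of the speech recognition step
    appearance, background = "partner_face", "office"   # results of the vision step
    frames = synthesize_video(
        generate_sentence_joints(acquire_word_joints(sentence)), appearance, background)
    print(f"{len(frames)} frames generated")
```

In the claimed method, the concatenation-and-midpoint placeholder would be replaced by the neural network that produces natural sentence-joint information, and the per-frame dictionaries would be rendered frames in which the recognized appearance and background images are synthesized with the motion model.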
Bibliography: Application Number: US201916536151