Speech driven realistic mouth animation based on multi-modal unit selection

Bibliographic Details
Published in: Journal on Multimodal User Interfaces, Vol. 2, No. 3-4, pp. 157-169
Main Authors: Jiang, Dongmei; Ravyse, Ilse; Sahli, Hichem; Verhelst, Werner
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer-Verlag, 01.12.2008
Summary: This paper presents a novel audio-visual diviseme (viseme pair) instance selection and concatenation method for speech-driven photo-realistic mouth animation. First, an audio-visual diviseme database is built, consisting of the audio feature sequences, intensity sequences, and visual feature sequences of the instances. In the Viterbi-based diviseme instance selection, the accumulative cost is set as the weighted sum of three terms: 1) the logarithm of the concatenation smoothness of the synthesized mouth trajectory; 2) the logarithm of the pronunciation distance; and 3) the logarithm of the audio intensity distance between the candidate diviseme instance and the target diviseme segment in the incoming speech. The selected diviseme instances are time-warped and blended to construct the mouth animation. Objective and subjective evaluations of the synthesized mouth animations show that the multimodal diviseme instance selection algorithm proposed in this paper outperforms the triphone unit selection algorithm of Video Rewrite. Clear, accurate, and smooth mouth animations are obtained that match well with the pronunciation and intensity changes in the incoming speech. Moreover, with the logarithm function in the accumulative cost, it is easy to set the weights to obtain optimal mouth animations.
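The accumulative cost described in the abstract, roughly C = sum over segments t of [ w1·log(d_smooth) + w2·log(d_pron) + w3·log(d_int) ], lends itself to a standard Viterbi search over candidate instances. The sketch below is a minimal illustration of that idea, not the paper's implementation: the data layout and the distance functions pron_dist, int_dist, and smooth_dist are hypothetical placeholders (the paper's actual feature distances are not specified here), and the smoothness term is assumed to be a visual discontinuity measured at the concatenation point.

```python
import numpy as np

def select_diviseme_instances(targets, candidates,
                              w_smooth=1.0, w_pron=1.0, w_int=1.0):
    """Viterbi search over candidate diviseme instances (illustrative sketch).

    targets   : list of target diviseme segments from the incoming speech;
                each is a dict with 'audio' and 'intensity' arrays.
    candidates: per-target lists of database instances; each instance is a
                dict with 'audio', 'intensity', and 'visual' arrays.
    All field names and distances are assumptions, not the paper's API.
    """
    eps = 1e-9  # guard against log(0)

    def pron_dist(inst, tgt):
        # placeholder pronunciation distance: gap between mean audio features
        return np.linalg.norm(inst['audio'].mean(0) - tgt['audio'].mean(0))

    def int_dist(inst, tgt):
        # placeholder intensity distance: gap between mean intensities
        return abs(inst['intensity'].mean() - tgt['intensity'].mean())

    def smooth_dist(prev_inst, inst):
        # placeholder smoothness term: visual jump at the concatenation point
        return np.linalg.norm(prev_inst['visual'][-1] - inst['visual'][0])

    n = len(targets)
    # cost[t][j]: best accumulated cost ending in candidate j at step t
    cost = [np.full(len(c), np.inf) for c in candidates]
    back = [np.zeros(len(c), dtype=int) for c in candidates]

    for j, inst in enumerate(candidates[0]):
        cost[0][j] = (w_pron * np.log(pron_dist(inst, targets[0]) + eps)
                      + w_int * np.log(int_dist(inst, targets[0]) + eps))

    for t in range(1, n):
        for j, inst in enumerate(candidates[t]):
            local = (w_pron * np.log(pron_dist(inst, targets[t]) + eps)
                     + w_int * np.log(int_dist(inst, targets[t]) + eps))
            trans = np.array([w_smooth * np.log(smooth_dist(p, inst) + eps)
                              for p in candidates[t - 1]])
            total = cost[t - 1] + trans + local
            back[t][j] = int(np.argmin(total))
            cost[t][j] = total[back[t][j]]

    # backtrack the minimum-cost path of instance indices
    path = [int(np.argmin(cost[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Note how taking logarithms before weighting compresses the dynamic ranges of the three terms, which is consistent with the abstract's remark that the weights then become easy to tune.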
ISSN: 1783-7677; 1783-8738
DOI: 10.1007/s12193-009-0015-7