Interactive decoding of words from visual speech recognition models

Bibliographic Details
Main Authors: Shillingford, Brendan; Assael, Yannis; Denil, Misha
Format: Journal Article
Language: English
Published: 01.07.2021

Summary: This work describes an interactive decoding method to improve the performance of visual speech recognition systems using user input to compensate for the inherent ambiguity of the task. Unlike most phoneme-to-word decoding pipelines, which produce phonemes and feed these through a finite state transducer, our method instead expands words in lockstep, facilitating the insertion of interaction points at each word position. Interaction points enable us to solicit input during decoding, allowing users to interactively direct the decoding process. We simulate the behavior of user input using an oracle to give an automated evaluation, and show promise for the use of this method for text input.
DOI: 10.48550/arxiv.2107.00692
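
The summary describes decoding that expands the transcript one word at a time, with an interaction point at each word position where a user (simulated by an oracle) can steer the result. The Python sketch below illustrates that general idea only; the function names (`interactive_decode`, `propose_words`, `make_oracle`), the top-k candidate interface, and the `<eos>` convention are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of word-by-word decoding with interaction points and an
# oracle-simulated user. All interfaces here are assumptions for illustration.
from typing import Callable, List, Tuple

def interactive_decode(
    propose_words: Callable[[List[str]], List[Tuple[str, float]]],
    select: Callable[[List[str], List[str]], str],
    max_len: int = 20,
    top_k: int = 5,
) -> List[str]:
    """Expand the transcript one word at a time; at each word position,
    present the top-k candidate words (an "interaction point") and let the
    user or oracle choose the continuation."""
    words: List[str] = []
    for _ in range(max_len):
        candidates = propose_words(words)            # [(word, log_prob), ...]
        if not candidates:
            break
        top = sorted(candidates, key=lambda c: -c[1])[:top_k]
        choice = select(words, [w for w, _ in top])  # interaction point
        if choice == "<eos>":
            break
        words.append(choice)
    return words

def make_oracle(reference: List[str]) -> Callable[[List[str], List[str]], str]:
    """Simulated user: picks the reference word when it appears among the
    presented candidates, otherwise falls back to the model's top choice."""
    def select(prefix: List[str], options: List[str]) -> str:
        if len(prefix) >= len(reference):
            return "<eos>"
        target = reference[len(prefix)]
        return target if target in options else options[0]
    return select

# Toy usage with a stub "model" that always proposes the same candidates.
if __name__ == "__main__":
    ref = ["hello", "world"]
    stub = lambda prefix: [("hello", -0.1), ("world", -0.2), ("<eos>", -0.3)]
    print(interactive_decode(stub, make_oracle(ref)))  # ['hello', 'world']
```

In this toy setup the oracle acts as an idealized user, which mirrors how an automated evaluation of interaction could be run without human participants.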