Simultaneous neural machine translation with a reinforced attention mechanism

Bibliographic Details
Published in: ETRI Journal, Vol. 43, No. 5, pp. 775-786
Main Authors: Lee, YoHan; Shin, JongHun; Kim, YoungKil
Format: Journal Article
Language: English
Published: Electronics and Telecommunications Research Institute (ETRI; Korean: 한국전자통신연구원), 01.10.2021
Subjects
Online Access: Get full text
ISSN: 1225-6463; 2233-7326
DOI: 10.4218/etrij.2020-0358

More Information
Summary: To translate in real time, a simultaneous translation system must determine when to stop reading source tokens and generate target tokens corresponding to the partial source sentence read up to that point. However, conventional attention-based neural machine translation (NMT) models cannot produce translations with adequate latency in online scenarios because they wait until a source sentence is complete before computing the alignment between source and target tokens. To address this issue, we propose a reinforcement learning (RL)-based attention mechanism, the reinforced attention mechanism, which allows a neural translation model to jointly train the stopping criterion and a partial translation model. The proposed attention mechanism comprises two modules, one to ensure translation quality and the other to control latency. Unlike previous RL-based simultaneous translation systems, which learn the stopping criterion from a fixed NMT model, the two modules can be trained jointly with a novel reward function. In our experiments, the proposed model achieves better translation quality and comparable latency relative to previous models.
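The summary describes an RL agent that interleaves READ (consume a source token) and WRITE (emit a target token) actions and is trained with a reward that trades off translation quality against latency. The paper's actual reward function is not reproduced here; the sketch below is a minimal, hypothetical illustration of that trade-off, using an invented lagging penalty (how many source tokens had been read beyond each target position when it was written) subtracted from a given quality score.

```python
def read_counts(actions):
    """For each WRITE action, record how many source tokens had been READ so far."""
    g, read = [], 0
    for a in actions:
        if a == "R":
            read += 1
        elif a == "W":
            g.append(read)
        else:
            raise ValueError(f"unknown action: {a!r}")
    return g

def combined_reward(actions, quality, lam=0.1):
    """Toy reward: quality score minus lam times the mean lagging penalty.

    The lagging penalty for the t-th target token (1-indexed) is g(t) - t,
    i.e. how far the reader ran ahead of the writer. This is an illustrative
    proxy, not the reward defined in the paper.
    """
    g = read_counts(actions)
    mean_lag = sum(gt - (t + 1) for t, gt in enumerate(g)) / len(g)
    return quality - lam * mean_lag
```

For example, the alternating schedule `R W R W R W` incurs no lag, while the full-sentence schedule `R R R W W W` (read everything, then write) pays the maximum latency penalty for the same quality.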
Bibliography: Funding information
Institute for Information & communications Technology Promotion (IITP), Grant/Award Number: R7119-16-1001
https://doi.org/10.4218/etrij.2020-0358