Reading Between the Lines: Exploring Infilling in Visual Narratives
Format | Journal Article |
---|---|
Language | English |
Published | 26.10.2020 |
Summary: Generating long-form narratives such as stories and procedures from multiple modalities has been a long-standing dream for artificial intelligence. Such narratives often carry crucial subtext derived from the surrounding contexts. General seq2seq training methods leave models ill-equipped to bridge the gap between these neighbouring contexts. In this paper, we tackle this problem by using *infilling* techniques, which predict missing steps in a narrative while generating textual descriptions from a sequence of images. We also present a new large-scale *visual procedure telling* (ViPT) dataset, with a total of 46,200 procedures and around 340k paired images and textual descriptions, that is rich in such contextual dependencies. Generating steps with the infilling technique proves effective on visual procedures, yielding more coherent text. We report a METEOR score of 27.51 on procedures, which is higher than the state of the art on visual storytelling. We also demonstrate the effects of interposing new text with missing images during inference. The code and the dataset will be publicly available at https://visual-narratives.github.io/Visual-Narratives/.
DOI: 10.48550/arxiv.2010.13944
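
To make the infilling idea in the summary concrete, below is a minimal, hypothetical sketch, not the paper's actual model: one step of a visual narrative is masked, and a small seq2seq transformer predicts that step's text from the surrounding steps' texts plus per-step image features. All module names, dimensions, and the toy data here are illustrative assumptions.

```python
# Hypothetical sketch of masked-step infilling; sizes and names are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

VOCAB, D, STEPS, STEP_LEN = 100, 64, 5, 8  # toy vocabulary and sequence sizes

class StepInfiller(nn.Module):
    """Predict the text of one masked step from bidirectional context."""
    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(2048, D)     # project per-step image features
        self.tok_emb = nn.Embedding(VOCAB, D)  # token embeddings
        self.seq2seq = nn.Transformer(
            d_model=D, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.out = nn.Linear(D, VOCAB)

    def forward(self, img_feats, ctx_tokens, tgt_tokens):
        # img_feats:  (B, STEPS, 2048) one visual feature per step
        # ctx_tokens: (B, STEPS*STEP_LEN) all step texts, with the missing
        #             step's tokens replaced by a MASK id
        # tgt_tokens: (B, STEP_LEN) gold text of the masked step
        ctx = torch.cat(
            [self.img_proj(img_feats), self.tok_emb(ctx_tokens)], dim=1)
        tgt = self.tok_emb(tgt_tokens)
        causal = self.seq2seq.generate_square_subsequent_mask(tgt_tokens.size(1))
        return self.out(self.seq2seq(ctx, tgt, tgt_mask=causal))

# Toy usage: infill one step given all images and the surrounding step texts.
model = StepInfiller()
imgs = torch.randn(1, STEPS, 2048)
ctx = torch.randint(0, VOCAB, (1, STEPS * STEP_LEN))
gold = torch.randint(0, VOCAB, (1, STEP_LEN))
logits = model(imgs, ctx, gold)                       # (1, STEP_LEN, VOCAB)
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), gold)
loss.backward()
```

The point of the sketch is that the encoder sees bidirectional context, both the preceding and the following steps, which is what distinguishes infilling from ordinary left-to-right generation.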