Event-Aware Retrospective Learning for Knowledge-Based Image Captioning
| Published in | IEEE Transactions on Multimedia, Vol. 26, pp. 4898-4911 |
| --- | --- |
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | Piscataway: IEEE, 01.01.2024 (The Institute of Electrical and Electronics Engineers, Inc.) |
| Subjects | |
Summary: External knowledge has been widely applied in image captioning to enrich the generated sentences. However, existing methods retrieve knowledge by considering only semantic relevance, ignoring whether the retrieved facts are useful for captioning. For example, when querying "person" in external knowledge, the statistically most relevant concepts may be "wearing shirt" or "riding horse", which are not consistent with the image contents and introduce noise into the generated sentences. Intuitively, we humans can iteratively correlate visual clues with corresponding knowledge to distinguish useful clues from noise. Therefore, we propose an event-aware retrospective learning network for knowledge-based image captioning, which employs a retrospective validation mechanism on captioning models to align the retrieved knowledge with visual contents. This approach takes an event-aware perspective and helps select useful knowledge that corresponds to visual facts. To better align images and knowledge, 1) we design an event-aware retrieval algorithm that clusters word-centered knowledge into triplet-centered knowledge (i.e., from "⟨subject - predicate - object⟩" to "⟨triplet A⟩ - edge - ⟨triplet B⟩"), which provides an event context to facilitate knowledge retrieval and validation. 2) We revisit image contents to retrospectively validate retrieved knowledge by aligning the visual representations of knowledge and image. We summarize the visual characteristics of each knowledge event from the Visual Genome dataset to help learn which knowledge does not exist in the visual scene and should be discarded. 3) We adopt a dynamic knowledge fusion module that calibrates image and knowledge representations for sentence generation, including a knowledge-controlled gate unit that jointly calculates visual and semantic features in event-aware patterns. Compared to current knowledge-based captioning methods, the proposed network retrospectively learns visual facts via event-aware retrieval and knowledge-image visual alignment, which regularizes knowledge-incorporated captioning with visual evidence. Extensive experiments on the MS-COCO dataset demonstrate the effectiveness of our method, and ablation studies and visualizations demonstrate the advantages of each component of the proposed model.
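As a rough illustration of the retrospective validation idea sketched in the summary, the snippet below scores each candidate knowledge event against the image's region features and discards events with no visual support. The function name, feature shapes, similarity measure, and threshold are illustrative assumptions, not implementation details taken from the paper.

```python
# Hypothetical sketch of retrospective validation: a candidate knowledge event
# is kept only if its pre-summarised visual signature aligns with the current
# image features. All names, shapes, and the threshold are assumptions.
import torch
import torch.nn.functional as F


def validate_knowledge(region_feats: torch.Tensor,
                       event_visual_sigs: torch.Tensor,
                       keep_threshold: float = 0.3) -> torch.Tensor:
    """Return a boolean mask over candidate knowledge events.

    region_feats:      (R, D) visual features of the image regions.
    event_visual_sigs: (K, D) visual signatures summarised per knowledge event
                       (e.g., averaged over its Visual Genome occurrences).
    """
    # Cosine similarity between every event signature and every image region.
    sims = F.cosine_similarity(event_visual_sigs.unsqueeze(1),   # (K, 1, D)
                               region_feats.unsqueeze(0),        # (1, R, D)
                               dim=-1)                           # (K, R)
    # An event counts as visually supported if at least one region matches it.
    best_support, _ = sims.max(dim=1)                            # (K,)
    return best_support >= keep_threshold


# Toy usage with random features (D = 512).
regions = torch.randn(36, 512)
events = torch.randn(20, 512)
mask = validate_knowledge(regions, events)
print(f"{int(mask.sum())} of {events.size(0)} candidate events kept")
```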
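The knowledge-controlled gate unit is likewise only described at a high level in the summary; below is a minimal sketch of one plausible gated-fusion formulation, assuming a learned sigmoid gate over concatenated visual and knowledge features. Layer sizes and wiring are assumptions and may differ from the actual model.

```python
# Hypothetical sketch of a knowledge-controlled gate: a per-channel sigmoid
# gate decides how much retrieved-knowledge information is mixed into the
# visual feature before sentence generation. Dimensions are assumptions.
import torch
import torch.nn as nn


class KnowledgeGate(nn.Module):
    """Gated fusion of a visual feature and a knowledge (event) feature."""

    def __init__(self, dim: int = 512):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)   # produces per-channel gate logits
        self.proj_k = nn.Linear(dim, dim)     # projects knowledge into visual space

    def forward(self, v: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) controls, channel-wise, how much knowledge passes through.
        g = torch.sigmoid(self.gate(torch.cat([v, k], dim=-1)))
        return g * v + (1.0 - g) * self.proj_k(k)


# Toy usage: fuse one batch of visual and knowledge vectors.
fusion = KnowledgeGate(dim=512)
v = torch.randn(4, 512)   # visual features
k = torch.randn(4, 512)   # retrieved-knowledge features
fused = fusion(v, k)      # (4, 512), fed to the caption decoder
print(fused.shape)
```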
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2023.3327537