Neural Wikipedian: Generating Textual Summaries from Knowledge Base Triples
Published in | Journal of Web Semantics, Vol. 52-53, pp. 1-15 |
---|---|
Main Authors | |
Format | Journal Article |
Language | English |
Publisher | Elsevier B.V. |
Published | 01.10.2018 |
ISSN | 1570-8268, 1873-7749 |
DOI | 10.1016/j.websem.2018.07.002 |
Summary: Most people need textual or visual interfaces in order to make sense of Semantic Web data. In this paper, we investigate the problem of generating natural language summaries for Semantic Web data using neural networks. Our end-to-end trainable architecture encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We explore a set of different approaches that enable our models to verbalise entities from the input set of triples in the generated text. Our systems are trained and evaluated on two corpora of loosely aligned Wikipedia snippets with triples from DBpedia and Wikidata, with promising results.
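The architecture described in the abstract, embedding a set of (subject, predicate, object) triples, pooling them into one fixed-dimensional vector, and conditioning a recurrent decoder on that vector, can be illustrated with a short sketch. The PyTorch code below is a hypothetical rendering of that general pipeline, not the authors' implementation: all class names, the mean-pooling step, the single-GRU decoder, and the dimensions are illustrative assumptions, and the paper's entity-verbalisation strategies are omitted.

```python
# Minimal sketch (not the paper's code) of a triples-to-text encoder-decoder:
# embed each triple, pool the triple vectors into one fixed-size vector, and
# decode a summary conditioned on it. Names and sizes are illustrative.
import torch
import torch.nn as nn

class TripleEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # project concatenated (subject, predicate, object) embeddings
        self.proj = nn.Linear(3 * emb_dim, hidden_dim)

    def forward(self, triples):  # triples: (batch, n_triples, 3) id tensor
        e = self.embed(triples)             # (batch, n, 3, emb_dim)
        e = e.flatten(start_dim=2)          # (batch, n, 3 * emb_dim)
        h = torch.tanh(self.proj(e))        # one vector per triple
        return h.mean(dim=1)                # pool to fixed dimensionality

class SummaryDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, context):
        # condition generation by using the encoded vector as initial state
        h0 = context.unsqueeze(0)           # (1, batch, hidden_dim)
        out, _ = self.gru(self.embed(tokens), h0)
        return self.out(out)                # next-token logits

# Toy usage: 2 summaries, 4 input triples each, 10-token target sentences.
enc = TripleEncoder(vocab_size=1000, emb_dim=64, hidden_dim=128)
dec = SummaryDecoder(vocab_size=1000, emb_dim=64, hidden_dim=128)
triples = torch.randint(0, 1000, (2, 4, 3))
targets = torch.randint(0, 1000, (2, 10))
logits = dec(targets[:, :-1], enc(triples))  # teacher forcing
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1000),
                             targets[:, 1:].reshape(-1))
loss.backward()
```

In the paper, dedicated mechanisms decide how entities from the input triples are verbalised in the output text; the sketch sidesteps this by treating every surface form as an ordinary vocabulary id.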