Sentence Generation from Triplets using Pre-trained LLM Model

Bibliographic Details
Published in: Procedia Computer Science, Vol. 260, pp. 424–431
Main Authors: Nilesh, Pandey, Avinash Chandra, Gupta, Atul
Format: Journal Article
Language: English
Published: Elsevier B.V., 2025
ISSN: 1877-0509
DOI: 10.1016/j.procs.2025.03.219


More Information
Summary: Generating a meaningful and contextually relevant sentence from a triplet is a challenge underlying various NLP tasks, such as knowledge graph population, question answering, and information extraction from knowledge graphs. This study introduces a fine-tuning approach for the GPT-2 large language model that produces a sentence from a given triplet (subject, predicate, and object). The proposed model was trained and validated on the benchmark REBEL dataset, achieving a training loss of 0.1476 and a validation loss of 0.1244. It also attained a BERTScore precision of 0.6597, recall of 0.5891, and F1 score of 0.6223. These results indicate that the model generates sentences from triplets effectively.
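To illustrate the general setup described in the summary, the sketch below linearizes a (subject, predicate, object) triplet into a prompt string of the kind one might feed a GPT-2 style causal language model during fine-tuning. The separator tokens and template are illustrative assumptions, not the exact scheme used in the paper.

```python
# Sketch of triplet linearization for a GPT-2 style causal LM.
# The special markers (<triplet>, <pred>, <obj>, <sent>) are hypothetical
# choices for this example, not the paper's actual format.

def triplet_to_prompt(subject: str, predicate: str, obj: str) -> str:
    """Linearize a triplet into a single prompt string for generation."""
    return f"<triplet> {subject} <pred> {predicate} <obj> {obj} <sent>"

def make_training_example(subject: str, predicate: str, obj: str,
                          sentence: str, eos: str = "<|endoftext|>") -> str:
    """Concatenate prompt and target sentence into one fine-tuning example."""
    return triplet_to_prompt(subject, predicate, obj) + " " + sentence + eos

prompt = triplet_to_prompt("Marie Curie", "award received",
                           "Nobel Prize in Physics")
example = make_training_example(
    "Marie Curie", "award received", "Nobel Prize in Physics",
    "Marie Curie received the Nobel Prize in Physics.",
)
```

At inference time, only the prompt (ending at the `<sent>` marker) would be given to the fine-tuned model, which then completes it with the generated sentence.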