Sentence Generation from Triplets using Pre-trained LLM Model
Published in | Procedia Computer Science, Vol. 260, pp. 424-431 |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | Elsevier B.V., 2025 |
Subjects | |
ISSN | 1877-0509 |
DOI | 10.1016/j.procs.2025.03.219 |
Summary: | Generating a meaningful and contextually relevant sentence from a triplet is challenging and underpins NLP generation tasks such as knowledge graph population, question answering, and information extraction from knowledge graphs. This study introduces a fine-tuned GPT-2 large language model that produces a sentence from a given triplet (subject, predicate, and object). The model was trained and validated on the benchmark REBEL dataset, achieving a training loss of 0.1476 and a validation loss of 0.1244. It also attained a BERTScore precision of 0.6597, recall of 0.5891, and F1 score of 0.6223, indicating that the model generates sentences well. |
---|---|
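For orientation, here is a minimal sketch of a triplet-to-sentence pipeline like the one summarized above, using Hugging Face Transformers and the `bert-score` package. It is not the authors' code: the prompt linearization, the decoding settings, and the use of the base `gpt2` checkpoint (standing in for the paper's fine-tuned weights, which are not provided here) are all assumptions.

```python
# Sketch only: generate a sentence from a (subject, predicate, object) triplet
# with a GPT-2 checkpoint and score the output with BERTScore.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from bert_score import score as bert_score

# The paper fine-tunes GPT-2 on REBEL; the base "gpt2" checkpoint is used here
# only as a placeholder for those fine-tuned weights.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def triplet_to_sentence(subject: str, predicate: str, obj: str) -> str:
    # Hypothetical linearization of the triplet into a text prompt; the paper's
    # exact input format is not specified in the abstract.
    prompt = f"subject: {subject} | predicate: {predicate} | object: {obj} | sentence:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        output_ids = model.generate(
            input_ids,
            max_new_tokens=40,
            num_beams=4,
            no_repeat_ngram_size=2,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the text generated after the prompt.
    return tokenizer.decode(
        output_ids[0][input_ids.shape[1]:], skip_special_tokens=True
    ).strip()

# Usage: generate a candidate sentence and compare it to a reference with BERTScore.
candidate = triplet_to_sentence("Marie Curie", "was born in", "Warsaw")
reference = "Marie Curie was born in Warsaw."
P, R, F1 = bert_score([candidate], [reference], lang="en")
print(candidate)
print(f"BERTScore  P={P.item():.4f}  R={R.item():.4f}  F1={F1.item():.4f}")
```

With the base checkpoint the output is just an unconditioned continuation of the prompt; approaching the reported BERTScore (F1 around 0.62) would require fine-tuning on REBEL-style (triplet, sentence) pairs as the abstract describes.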