Learning to represent causality in recommender systems driven by large language models (LLMs)
Published in | Discover Applied Sciences, Vol. 7, No. 9, Article 960 (27 pages) |
Main Authors | , , , |
Format | Journal Article |
Language | English |
Published | Cham: Springer International Publishing (Springer Nature B.V.), 01.09.2025 |
Summary: | Current recommender systems mainly rely on correlation-based models, which limits their ability to uncover true causal relationships between user preferences and item suggestions. In this paper, we propose a hybrid model that combines a Bayesian network with a large language model (LLM) to enhance both the relevance and interpretability of recommendations. The Bayesian network captures causal dependencies among user-item interactions, while the LLM injects contextual semantics from user reviews and product descriptions. Our method was evaluated on a dataset of 1.2 million interactions and showed significant improvements over baseline models, with gains of 84.44% in precision, 88.37% in recall, and 89.36% in NDCG. A statistical t-test confirmed the significance of these improvements (p < 0.05). We further provide an error analysis and discuss the implications of using causal modeling for scalable, transparent, and GDPR-compliant recommender systems. Our results underscore the potential of causal representation learning to improve personalization and decision-making in recommender systems. |
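The NDCG figure reported in the abstract refers to the standard normalized discounted cumulative gain ranking metric. As a minimal sketch of that formula only (not the authors' evaluation code), it can be computed as:

```python
import math

def dcg(relevances):
    # Discounted cumulative gain: graded relevance, discounted by log2 of rank.
    return sum((2 ** rel - 1) / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(ranked_rels, k=None):
    # NDCG@k: DCG of the system's ranking divided by the DCG of the
    # ideal (relevance-sorted) ranking; 1.0 means a perfect ranking.
    rels = ranked_rels[:k] if k else ranked_rels
    ideal = sorted(ranked_rels, reverse=True)
    ideal = ideal[:k] if k else ideal
    denom = dcg(ideal)
    return dcg(rels) / denom if denom > 0 else 0.0
```

For example, `ndcg([3, 2, 1, 0])` evaluates to 1.0 because the items are already in ideal relevance order, while any misordering yields a value strictly between 0 and 1.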
ISSN: | 2523-3963, 2523-3971, 3004-9261 |
DOI: | 10.1007/s42452-025-07551-8 |