Multi-behavior-based graph contrastive learning recommendation
Published in: Knowledge and Information Systems, Vol. 66, no. 6, pp. 3477-3496
Main Authors: , , , ,
Format: Journal Article
Language: English
Published: London: Springer London, 01.06.2024; Springer Nature B.V
Summary: Graph-based collaborative filtering can effectively explore the interaction information between users and items, but its performance is still limited by data sparsity and low-quality representation learning. To address this, we propose a recommendation model named Multi-behavior-based Graph Contrastive Learning (MBGCL) Recommendation. First, we leverage a graph convolutional network that balances recommendation accuracy and novelty to learn from multi-behavior data, and we apply MLP modules to enhance the nonlinearity of the representations obtained from the graph convolutional network before integrating the learned multi-behavior representations. Second, we enhance representation capability and alleviate popularity bias through two contrastive learning auxiliary tasks: the multi-behavior contrastive learning task contrasts the target-behavior subgraph with the auxiliary-behavior subgraphs, while the embedding-noise contrastive learning task injects noise into the different behavior representations to generate augmented views for contrastive learning. Finally, we optimize the learning objectives by jointly training the graph collaborative filtering recommendation task with the contrastive learning auxiliary tasks. Empirical results on two real-world datasets validate the effectiveness of our model, which outperforms SOTA baselines on recommendation accuracy and novelty metrics.
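The embedding-noise contrastive task described in the summary can be illustrated with a minimal sketch. This is an assumption about the general technique (a SimGCL-style perturbation with an InfoNCE objective), not the paper's exact formulation; the function names, noise scale `eps`, and `temperature` here are illustrative:

```python
import torch
import torch.nn.functional as F

def add_noise(emb, eps=0.1):
    # Perturb embeddings with random noise whose direction is
    # sign-aligned with the embedding, scaled by eps.
    noise = F.normalize(torch.rand_like(emb), dim=-1) * torch.sign(emb)
    return emb + eps * noise

def info_nce(view1, view2, temperature=0.2):
    # InfoNCE loss: matching rows of the two augmented views are
    # positives; all other rows in the batch act as negatives.
    z1 = F.normalize(view1, dim=-1)
    z2 = F.normalize(view2, dim=-1)
    logits = z1 @ z2.T / temperature          # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))         # diagonal = positive pairs
    return F.cross_entropy(logits, labels)

# Toy usage: perturb one batch of embeddings twice and contrast the views.
emb = torch.randn(32, 64)
loss = info_nce(add_noise(emb), add_noise(emb))
```

In joint training, a loss of this form would be added as a weighted auxiliary term alongside the main recommendation loss.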
ISSN: 0219-1377; 0219-3116
DOI: 10.1007/s10115-024-02064-z