Learnable Model Augmentation Contrastive Learning for Sequential Recommendation
Published in: IEEE Transactions on Knowledge and Data Engineering, Vol. 36, No. 8, pp. 3963-3976
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2024
Summary: Sequential Recommendation (SR) methods play a crucial role in recommender systems, as they aim to capture users' dynamic interests from their historical interactions. Recently, Contrastive Learning (CL) has emerged as a successful method for sequential recommendation; it uses various data augmentations to generate contrastive views and mine supervised signals from the data, alleviating data sparsity issues. However, most existing sequential data augmentation methods may destroy the semantic characteristics of sequential interactions. Moreover, they often rely on random operations when generating contrastive views, leading to suboptimal performance. To this end, this paper proposes Learnable Model Augmentation Contrastive learning for sequential Recommendation (LMA4Rec). Specifically, LMA4Rec first takes a model-based augmentation approach to generate contrastive views. It then uses Learnable Bernoulli Dropout (LBD) to implement learnable model augmentation operations, and applies contrastive learning between the contrastive views to extract supervised signals. Furthermore, a novel multi-positive contrastive learning loss alleviates the supervision sparsity issue. Finally, experiments on public datasets show that LMA4Rec effectively improves sequential recommendation performance compared with state-of-the-art baseline methods.
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2023.3330426