Symmetric Graph Contrastive Learning against Noisy Views for Recommendation

Bibliographic Details
Main Authors: Zhao, Chu; Yang, Enneng; Liang, Yuliang; Zhao, Jianzhe; Guo, Guibing; Wang, Xingwei
Format: Journal Article
Language: English
Published: 03.08.2024
Subjects: Computer Science - Artificial Intelligence; Computer Science - Information Retrieval; Computer Science - Learning
DOI: 10.48550/arxiv.2408.02691
License: http://creativecommons.org/licenses/by/4.0
Online Access: https://arxiv.org/abs/2408.02691

Abstract Graph Contrastive Learning (GCL) leverages data augmentation techniques to produce contrastive views, enhancing the accuracy of recommendation systems by learning the consistency between those views. However, existing augmentation methods, such as directly perturbing the interaction graph (e.g., node/edge dropout), may interfere with the original connections and generate poor contrastive views, resulting in sub-optimal performance. In this paper, we define views that share only a small amount of information with the original graph due to poor data augmentation as noisy views (i.e., the bottom 20% of views, whose cosine similarity to the original view is less than 0.1). We demonstrate through detailed experiments that noisy views significantly degrade recommendation performance. We further propose a model-agnostic Symmetric Graph Contrastive Learning (SGCL) method with theoretical guarantees to address this issue. Specifically, we introduce symmetry theory into graph contrastive learning, based on which we propose a symmetric form and a contrastive loss that are resistant to noisy interference. We provide theoretical proof that the proposed SGCL method has a high tolerance to noisy views, and we further demonstrate this by conducting extensive experiments on three real-world datasets. The experimental results show that our approach substantially improves recommendation accuracy, with relative improvements of up to 12.25% over nine competing models. These results highlight the efficacy of our method.
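
The abstract's operational definition of a noisy view, an augmented view whose cosine similarity to the original view falls below 0.1 (roughly the bottom 20% of views), can be illustrated with a minimal sketch. The Python snippet below is an assumption for illustration only, not the authors' implementation: noisy_view_mask applies the stated similarity threshold per embedding, and symmetric_info_nce is a generic symmetrized InfoNCE that merely stands in for the paper's actual SGCL objective, which is defined in the full text.

import torch
import torch.nn.functional as F

def noisy_view_mask(orig_emb, aug_emb, threshold=0.1):
    # Per-node cosine similarity between original-view and augmented-view embeddings.
    sim = F.cosine_similarity(orig_emb, aug_emb, dim=-1)
    # A view counts as "noisy" (per the abstract's definition) when similarity < threshold.
    return sim < threshold

def symmetric_info_nce(z1, z2, temperature=0.2):
    # Generic symmetrized InfoNCE: contrast view 1 against view 2 and vice versa, then average.
    # Placeholder only; the paper defines its own noise-tolerant symmetric loss.
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

if __name__ == "__main__":
    torch.manual_seed(0)
    orig = torch.randn(256, 64)              # original-view node embeddings (hypothetical)
    aug = orig + 0.8 * torch.randn(256, 64)  # embeddings from a perturbed (augmented) graph
    mask = noisy_view_mask(orig, aug)
    print(f"noisy views: {mask.sum().item()} / {mask.numel()}")
    print(f"symmetric InfoNCE: {symmetric_info_nce(orig, aug).item():.4f}")

In practice such a mask could be used to filter or down-weight noisy views before computing the contrastive loss; the noise-tolerant symmetric formulation itself, along with its theoretical guarantees, is given in the paper.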