Deep recommendation with iteration directional adversarial training
| Published in | Computing, Vol. 106, no. 10, pp. 3151–3174 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Vienna: Springer Vienna, 01.10.2024 (Springer Nature B.V.) |
| Subjects | |
| Summary | Deep neural networks are vulnerable to adversarial attacks, posing significant security concerns across various applications, particularly in computer vision. Adversarial training has demonstrated effectiveness in improving the robustness of deep learning models by incorporating perturbations into the input space during training. Recently, adversarial training has been successfully applied to deep recommender systems: user and item embeddings are perturbed through a minimax game, with constraints on the perturbation directions, to enhance the model's robustness and generalization. However, such methods still fail to defend against iterative attacks, which have shown over a 60% increase in effectiveness in the computer vision domain. Deep recommender systems may therefore be more susceptible to iterative attacks, which can lead to generalization failures. In this paper, we adapt iterative adversarial examples to deep recommender systems. Specifically, we propose a Deep Recommender with Iteration Directional Adversarial Training (DRIDAT) that combines an attention mechanism with directional adversarial training for recommendation. First, we establish a consumer-product collaborative attention mechanism to capture each consumer's differing preferences for the products they are interested in, as well as the distinct preferences of different consumers for the same product. Second, we train the DRIDAT objective with adversarial learning to minimize the impact of iterative attacks. In addition, a maximum-direction attack can push the perturbed embedding vectors toward instances with different labels; we mitigate this problem by imposing suitable constraints on the attack direction. Finally, we conduct a series of evaluations on two prominent datasets. The findings show that our method outperforms all compared methods on all metrics. |
|---|---|
| ISSN | 0010-485X; 1436-5057 |
| DOI | 10.1007/s00607-024-01326-6 |
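
The record above gives only the abstract, so the following is a rough, assumption-laden sketch (in PyTorch) of the kind of iterative, direction-constrained adversarial training it describes: embeddings are perturbed over several PGD-style steps to maximise a ranking loss, the perturbation direction is constrained so it does not point toward a differently-labelled (negative) item, and the model is then trained to minimise both the clean and the perturbed loss. The class and function names, the BPR-style loss, the projection-based constraint, and all hyperparameters (`eps`, `alpha`, `steps`, `lam`) are assumptions for illustration, not the paper's actual formulation; see the DOI above for the full method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbeddingRecommender(nn.Module):
    """Minimal embedding-based recommender, standing in for the paper's attention model."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def score(self, u_vec: torch.Tensor, i_vec: torch.Tensor) -> torch.Tensor:
        # Dot-product preference score between user and item vectors.
        return (u_vec * i_vec).sum(dim=-1)


def directional_iterative_perturbation(model, users, pos_items, neg_items,
                                       eps=0.05, alpha=0.02, steps=3):
    """Inner 'max' player: build iterative (PGD-style) perturbations on the embeddings
    that increase a BPR-style ranking loss, then project out the component of the item
    perturbation that points toward the differently-labelled (negative) item.
    The projection is an assumed stand-in for the paper's direction constraint."""
    u = model.user_emb(users).detach()
    i_pos = model.item_emb(pos_items).detach()
    i_neg = model.item_emb(neg_items).detach()

    delta_u = torch.zeros_like(u, requires_grad=True)
    delta_i = torch.zeros_like(i_pos, requires_grad=True)

    for _ in range(steps):
        loss = -F.logsigmoid(
            model.score(u + delta_u, i_pos + delta_i)
            - model.score(u + delta_u, i_neg)
        ).mean()
        grad_u, grad_i = torch.autograd.grad(loss, [delta_u, delta_i])
        with torch.no_grad():
            delta_u += alpha * F.normalize(grad_u, dim=-1)
            delta_i += alpha * F.normalize(grad_i, dim=-1)
            # Directional constraint (assumption): drop the part of the item
            # perturbation pointing from the positive toward the negative item.
            toward_neg = F.normalize(i_neg - i_pos, dim=-1)
            proj = (delta_i * toward_neg).sum(dim=-1, keepdim=True).clamp(min=0)
            delta_i -= proj * toward_neg
            # Keep perturbations inside an epsilon-ball, as in standard PGD.
            delta_u.clamp_(-eps, eps)
            delta_i.clamp_(-eps, eps)
    return delta_u.detach(), delta_i.detach()


def adversarial_training_step(model, optimizer, users, pos_items, neg_items, lam=1.0):
    """Outer 'min' player: minimise the clean ranking loss plus the loss under the
    directional iterative perturbation (the weight `lam` is an assumption)."""
    delta_u, delta_i = directional_iterative_perturbation(model, users, pos_items, neg_items)
    u = model.user_emb(users)
    i_pos = model.item_emb(pos_items)
    i_neg = model.item_emb(neg_items)
    clean = -F.logsigmoid(model.score(u, i_pos) - model.score(u, i_neg)).mean()
    adv = -F.logsigmoid(model.score(u + delta_u, i_pos + delta_i)
                        - model.score(u + delta_u, i_neg)).mean()
    loss = clean + lam * adv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, `EmbeddingRecommender` would be replaced by the consumer-product collaborative attention model described in the summary, with `(user, positive item, negative item)` triples sampled from the training data; the sketch only illustrates how iterative perturbation and a direction constraint can be combined in a minimax training loop.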