A Reinforcement Learning Framework for Explainable Recommendation
Published in | Proceedings (IEEE International Conference on Data Mining), pp. 587 - 596 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.11.2018 |
Summary: | Explainable recommendation, which provides explanations about why an item is recommended, has attracted increasing attention due to its ability to help users make better decisions and increase their trust in the system. Existing explainable recommendation methods either ignore the working mechanism of the recommendation model or are designed for a specific recommendation model. Moreover, it is difficult for existing methods to ensure the presentation quality of the explanations (e.g., consistency). To solve these problems, we design a reinforcement learning framework for explainable recommendation. Our framework can explain any recommendation model (model-agnostic) and can flexibly control the explanation quality based on the application scenario. To demonstrate the effectiveness of our framework, we show how it can be used to generate sentence-level explanations. Specifically, we instantiate the explanation generator in the framework with a personalized-attention-based neural network. Offline experiments demonstrate that our method can effectively explain both collaborative filtering methods and deep-learning-based models. Evaluation with human subjects shows that the explanations generated by our method are significantly more useful than the explanations generated by the baselines. |
---|---|
ISSN: | 2374-8486 |
DOI: | 10.1109/ICDM.2018.00074 |
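
The abstract states that the explanation generator is instantiated with a personalized-attention-based neural network, but gives no architectural details. The sketch below is a rough, hypothetical illustration only: the class name, dimension parameters, and additive scoring function are assumptions, and the paper's reinforcement learning training loop and quality-controlling rewards are omitted entirely. It shows one plausible way a personalized-attention scorer over candidate explanation sentences could be written in PyTorch.

```python
# Hypothetical sketch of a personalized-attention explanation selector.
# Names (PersonalizedAttention, user_dim, sent_dim, hidden_dim) are
# illustrative assumptions, not taken from the ICDM 2018 paper; the
# actual architecture and the RL training procedure may differ.
import torch
import torch.nn as nn


class PersonalizedAttention(nn.Module):
    """Scores candidate explanation sentences conditioned on the user."""

    def __init__(self, user_dim: int, sent_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.user_proj = nn.Linear(user_dim, hidden_dim)
        self.sent_proj = nn.Linear(sent_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, user_emb: torch.Tensor, sent_embs: torch.Tensor) -> torch.Tensor:
        # user_emb: (user_dim,), sent_embs: (num_sentences, sent_dim)
        u = self.user_proj(user_emb).unsqueeze(0)           # (1, hidden_dim)
        s = self.sent_proj(sent_embs)                        # (num_sentences, hidden_dim)
        logits = self.score(torch.tanh(u + s)).squeeze(-1)   # (num_sentences,)
        return torch.softmax(logits, dim=-1)                 # attention over candidate sentences


if __name__ == "__main__":
    torch.manual_seed(0)
    attn = PersonalizedAttention(user_dim=32, sent_dim=128)
    user = torch.randn(32)            # embedding of the target user
    sentences = torch.randn(10, 128)  # embeddings of 10 candidate explanation sentences
    weights = attn(user, sentences)
    print("chosen explanation index:", int(weights.argmax()))
```

In this sketch, the attention weights personalize which candidate sentence is surfaced as the explanation for a given user; in the framework described by the abstract, such a generator would be trained with reinforcement signals that reward explanation quality, rather than used untrained as in the forward pass above.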