A systematic survey on explainable AI applied to fake news detection

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 122, p. 106087
Main Authors: A.B., Athira; Kumar, S.D. Madhu; Chacko, Anu Mary
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2023
Summary: The exponential proliferation of fake news in recent years has heightened the demand for automated fake news detection. Several techniques for detecting fake news have yielded encouraging results; however, these detection systems lack explainability, i.e., they do not provide the reasons for their predictions. A critical advantage of explainability is that it enables the identification of bias and discrimination in detection algorithms. Very few surveys have been conducted in the area of explainable AI applied to fake news detection. All of these surveys summarize the existing methods in this area, and most are limited to specific topics such as datasets, evaluation methods, and potential future applications. In contrast, this survey examines existing explainable AI methods and highlights the current state of the art in explainable fake news detection. Based on our review of existing explainable fake news detection techniques, we identify and enumerate several open research problems. We group the existing work in this area from four perspectives: the features used for classification, the explanation type, the explainee type, and the metric used for explainability evaluation. For each of these four groups, we also list potential research topics that remain unexplored and need attention.
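To make concrete what "providing the reason for a prediction" means in this context, the sketch below shows one common post-hoc approach. It is illustrative only and is not drawn from the surveyed paper: it trains a toy TF-IDF plus logistic regression news classifier with scikit-learn and uses LIME's LimeTextExplainer to list the words that pushed a headline toward the "fake" label. The tiny training corpus, class names, and example headline are all hypothetical.

```python
# Illustrative sketch (not from the survey): a toy fake news classifier
# explained with LIME. Assumes scikit-learn and the `lime` package.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus; a real system would train on a labeled dataset.
texts = [
    "Scientists confirm vaccine safety in large peer-reviewed trial",
    "Government report details quarterly unemployment figures",
    "SHOCKING miracle cure that doctors do not want you to know",
    "Aliens secretly control the stock market, insider reveals",
]
labels = [0, 0, 1, 1]  # 0 = real, 1 = fake

# TF-IDF features + logistic regression, wrapped in a pipeline so LIME
# can call predict_proba directly on raw strings.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["real", "fake"])
headline = "Miracle cure revealed, doctors shocked by secret trick"
explanation = explainer.explain_instance(
    headline, pipeline.predict_proba, num_features=5
)

# Each (word, weight) pair is the explanation: positive weights push
# the prediction toward the "fake" class, negative toward "real".
for word, weight in explanation.as_list():
    print(f"{word}: {weight:+.3f}")
```

In the survey's terminology, this is a feature-attribution explanation aimed at a lay explainee; other explanation types covered by such work (e.g., evidence-based or example-based explanations) would look quite different.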
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2023.106087