Trustworthy AI for Data Governance: Explaining Compliance Decisions Using SHAP and Causal Inference

Bibliographic Details
Published in International Journal For Multidisciplinary Research, Vol. 7, No. 4
Main Author Chintakindhi, Sai
Format Journal Article
Language English
Published 19.07.2025
ISSN 2582-2160
DOI 10.36948/ijfmr.2025.v07i04.51762

More Information
Summary: Integrating new technologies into data governance requires a close look at how compliance, transparency, and user trust interact in AI systems. Recent work stresses the need to explain why AI systems make the decisions they do, particularly when those decisions must satisfy complex legal and ethical requirements. This study uses SHAP (SHapley Additive exPlanations), a game-theoretic method that explains model predictions by quantifying how much each feature contributes to the outcome. By combining SHAP with causal inference, the goal is to make AI decision-making more transparent. The study also foregrounds requirements set by regulations such as the General Data Protection Regulation (GDPR) [1]; such rules demand that automated decisions affecting individuals be explainable, which provides a benchmark for judging AI interpretability [2]. Building trust in AI systems is essential because they are increasingly deployed in settings where compliance decisions carry significant consequences. Explainability and accountability together are key to earning that trust: AI systems should not only satisfy data governance rules but also provide explanations that make sense to users and regulators [3]. Causal inference plays a central role here, moving the analysis beyond correlations to the potential causal links between variables that influence decisions, which makes AI outputs easier to interpret [4]. For example, causal diagrams make explicit how specific features directly affect compliance outcomes, improving the trustworthiness of AI applications [5]. Shifting from purely statistical to causal reasoning sharpens the discussion of AI reliability and strengthens compliance. The research leverages SHAP's ability to provide localized explanations to show how individual features contribute to model predictions in compliance settings, giving stakeholders insights they can act on [6]. This interplay of interpretability and accountability matters for organizations that want to meet regulatory requirements while building trust with their data subjects. SHAP and causal analysis together offer a practical route to these goals, providing a way to embed explainable AI models in routine governance practices [7][8]. The findings ultimately underline the need to shift AI governance toward transparent systems that prioritize stakeholder involvement and regulatory compliance. As organizations handle diverse datasets and compliance requirements, the proposed framework aims to give a clear path to using AI technologies responsibly and ethically [9]. The study shows how explainability and compliance can be combined in AI applications, illustrated by a flowchart that maps out the data governance framework. This advances not only the theory of data governance but also has practical implications across industries, preparing them to handle the complexities of deploying AI where compliance is critical [11][12]. By fostering an environment that values trust and accountability, the study aims to set new standards for responsible AI, ensuring that data governance evolves alongside technology and societal expectations [13].
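
For concreteness, the following is a minimal sketch of the kind of localized SHAP explanation the abstract describes, applied to a hypothetical compliance classifier. The feature names, synthetic data, and model choice are illustrative assumptions, not the paper's dataset or pipeline.

# Illustrative sketch only: local SHAP attributions for a hypothetical
# compliance classifier. Features, data, and model are assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "consent_recorded": rng.integers(0, 2, n),         # 1 = consent on file
    "retention_days": rng.integers(1, 3650, n),        # data retention period
    "cross_border_transfer": rng.integers(0, 2, n),    # 1 = data leaves jurisdiction
    "purpose_limitation_score": rng.uniform(0, 1, n),  # policy-alignment score
})
# Synthetic label: "compliant" when consent exists and retention is short.
y = ((X["consent_recorded"] == 1) & (X["retention_days"] < 730)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# SHAP assigns each feature an additive contribution to each prediction.
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)

# Per-feature contributions for one record: a "localized explanation"
# that could be shown to auditors, regulators, or data subjects.
print(dict(zip(X.columns, shap_values[0].round(3))))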
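
The causal side of the argument, looking past correlations to the links that actually drive a compliance outcome, can likewise be illustrated with a small backdoor-adjustment sketch. The causal graph, variable names, and simulated data below are hypothetical and are not taken from the paper.

# Illustrative sketch only: backdoor adjustment for the effect of a
# hypothetical feature on a continuous compliance score, assuming the graph
#   data_sensitivity -> cross_border_transfer -> compliance_score
#   data_sensitivity -> compliance_score            (confounder)
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
data_sensitivity = rng.normal(size=n)                                   # confounder
transfer = (data_sensitivity + rng.normal(size=n) > 0).astype(float)    # treatment
compliance_score = 0.8 - 0.3 * transfer - 0.2 * data_sensitivity \
                   + 0.1 * rng.normal(size=n)                           # outcome

df = pd.DataFrame({"transfer": transfer,
                   "data_sensitivity": data_sensitivity,
                   "compliance_score": compliance_score})

# Naive (correlational) estimate: biased by the confounder.
naive = sm.OLS(df["compliance_score"],
               sm.add_constant(df[["transfer"]])).fit()

# Backdoor-adjusted estimate: condition on the confounder named in the graph.
adjusted = sm.OLS(df["compliance_score"],
                  sm.add_constant(df[["transfer", "data_sensitivity"]])).fit()

print("naive effect:   ", round(naive.params["transfer"], 3))
print("adjusted effect:", round(adjusted.params["transfer"], 3))

In this toy setup the naive regression exaggerates the negative effect of the transfer because data sensitivity influences both the transfer decision and the score; adjusting for the confounder identified in the diagram recovers roughly the simulated effect of -0.3, which is the kind of correlation-versus-causation distinction the abstract points to.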