XAI for Transparent Autonomous Vehicles: A New Approach to Understanding Decision-Making in Self-Driving Cars
| Published in | International eConference on Computer and Knowledge Engineering (Online), pp. 194-199 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 19.11.2024 |
| ISSN | 2643-279X |
| DOI | 10.1109/ICCKE65377.2024.10874778 |
Summary: While numerous advancements have been achieved in deep learning-based self-driving systems, the lack of transparency, interpretability, and user acceptance remains a significant challenge. Researchers argue that without the ability to explain decision-making behavior, deep learning models cannot be practically deployed in many real-world scenarios. This is vital for decision-making networks, since inaccurate results may cause dangerous road incidents. To address this problem, we propose an innovative approach that integrates the Convolutional Block Attention Module with Deep Connected Attention (DCA-CBAM-ResNet50) and a state-of-the-art decision-maker and textual-explainer model. This integration yields a more precise and comprehensive explainable system. To quantitatively analyze the performance of our model, we used the standard F1 score on the Berkeley DeepDrive object-induced action (BDD-OIA) dataset. Our proposed technique outperforms the current SOTA model and demonstrates a significant improvement in explainability. This research advances future explainable autonomous vehicles and contributes to creating more transparent and trustworthy self-driving systems.
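For orientation only: the record itself contains no code, but the CBAM building block named in the abstract is a standard attention module (Woo et al., 2018). The PyTorch sketch below shows a plain CBAM block; the class names are illustrative, and the Deep Connected Attention chaining that the paper adds on top of ResNet50 is not reproduced here.

```python
# Minimal sketch of the Convolutional Block Attention Module (CBAM).
# Illustrative only; the paper's DCA-CBAM-ResNet50 additionally connects
# attention maps across successive blocks, which is omitted here.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to both the average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise average and max maps, concatenated and convolved
        # into a single-channel spatial attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))
```

In a ResNet50 backbone, such a block is typically inserted after each residual block. The F1 score mentioned in the abstract is the usual harmonic mean of precision and recall, F1 = 2PR / (P + R).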