Decoding accountability: the importance of explainability in liability frameworks for smart border systems

Bibliographic Details
Published in: Discover Computing, Vol. 28, No. 1, p. 64
Main Authors: Nnawuchi, Uchenna; George, Carlisle
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer Nature B.V.), 04.05.2025
ISSN: 2948-2992, 1386-4564, 1573-7659
DOI: 10.1007/s10791-025-09559-5


More Information
Summary: This paper examines the challenges posed by Automated Decision-Making systems (ADMs) in border control, focusing on the limitations of the proposed AI Liability Directive (AILD), now withdrawn, in addressing potential harms. We identify key issues within the AILD, including the plausibility requirement, the knowledge paradox, and the exclusion of human-in-the-loop systems, which create significant barriers for claimants seeking redress. Although the AILD has been withdrawn, the European Commission is contemplating a new proposal for an AI liability regime; if that proposal resembles the AILD, the substantial shortcomings identified here will need to be addressed. To address these shortcomings, we propose integrating sui generis explainability requirements into the AILD framework, or mandating compliance with Article 86 of the Artificial Intelligence Act (AIA), notwithstanding its current ineffectiveness. This approach aims to bridge knowledge and liability gaps, empower claimants, and enhance transparency in AI decision-making processes. Our recommendations include expanding the disclosure requirements to incorporate a sui generis explainability requirement, implementing a tiered plausibility standard, and introducing regulatory sandboxes. These measures seek to engender accountability and fairness. With the refinement of the AILD in mind, these considerations aim to inform any future proposal for an AI liability regime and to foster a regulatory environment that encourages the responsible and accountable development and use of AI technologies, ensuring that AI-driven or smart border control systems enhance security and efficiency while upholding fundamental rights and human dignity.