The Problem of the Possibility of an Artificial Moral Agent in the Context of Kant’s Practical Philosophy


Bibliographic Details
Published in: Kantovskiĭ sbornik, Vol. 42, No. 4, pp. 225–239
Main Author: Yulia Sergeevna Fedotova
Format: Journal Article
Language: German
Published: Immanuel Kant Baltic Federal University, 01.01.2023
Summary: The question of whether an artificial moral agent (AMA) is possible implies discussion of a whole range of problems raised by Kant within the framework of practical philosophy that have not exhausted their heuristic potential to this day. First, I show the significance of the correlation between moral law and freedom. Since a rational being believes that his or her will is independent of external influences, the will turns out to be governed by the moral law and is autonomous. Morality and freedom are correlated through independence from the external. Accordingly, if the actions of artificial intelligence (AI) are determined by something or someone external to it (by a human), then it does not act morally and freely, but heteronomously. As a consequence of AI's lack of autonomy, and thus its lack of access to the moral law, it does not and cannot have a moral understanding that proceeds from the moral law. A further consequence is that it has no sense of duty, which would follow from the moral law. Thus, moral action becomes impossible for the AMA because it lacks autonomy and the moral law, moral understanding and a sense of duty. It is concluded, first, that the AMA not only cannot be moral, but should not be, since the inclusion of any moral principle would imply the necessity for the individual to choose it, making the choice of the principle itself immoral. Second, although AI has no will as such, which prima facie makes not only moral but also legal action impossible, it can still act legally in the sense of conforming to legal law, since AI carries a quasi-human will. Thus, it is necessary that the creation of AI should be based not on moral principles, but on legal law that prioritises human freedom and rights.
ISSN: 0207-6918; 2310-3701
DOI: 10.5922/0207-6918-2023-4-12