Moral Decision-making with Artificial Intelligence

Bibliographic Details
Published in: IEEE International Symposium on Ethics in Science, Technology and Engineering, p. 1
Main Author: Ocal, Ayse
Format: Conference Proceeding
Language: English
Published: IEEE, 06.06.2025
ISSN: 2996-3648
DOI: 10.1109/ETHICS65148.2025.11098393

Summary: Moral decisions, ethical decisions based on values such as trustworthiness, respect, and justice, might at first glance be attributed only to humans. However, in today's information age, artificial intelligence (AI) has become involved in moral decision-making processes in many critical areas [1]-[3], from medical decisions to autonomous vehicles confronted with moral dilemmas [4]. Moral decisions are difficult, critical decisions without objective answers for humans, who are surrounded by various roles, responsibilities, emotions, and values. Is it beneficial to completely automate these moral decisions? Or can AI assist humans by providing predictions and recommendations before humans make their final decisions? How do people feel about that? Existing literature has mostly relied on scenarios, experiments, and surveys to investigate people's feelings about moral decision-making processes involving AI [5]-[8]. However, because the questions and scenarios in these methods are designed solely according to the researchers' preferences, such approaches may not adequately capture people's feelings. Social media data, on the other hand, is created spontaneously by users, and a large share of society voices its opinions in social media discussions [9]-[11]. In this study, to investigate the feelings of a large number of individuals, we harness Reddit data. Reddit is a massive social media platform with 97.2 million daily active users with diverse mindsets [9], [11]. Additionally, Reddit preserves a level of anonymity not typically achieved on other social media platforms, so individuals may feel more secure and express their feelings about a topic more honestly. A pre-trained BERT model was fine-tuned for multiclass text classification using the GoEmotions dataset with 28 emotion categories [12]. A corpus of 20,604 comments collected from 15 AI-related subreddits was analyzed with this BERT model to classify the feelings expressed in the comments. Of these, 270 comments were directly linked to moral decision-making, and the findings are based on those comments. The findings reveal that the most common feelings are approval and curiosity, followed by disapproval and fear. Furthermore, discussions center on two main aspects: 1) AI helping humans make moral decisions by providing predictions, explanations, or recommendations, and 2) fully automated AI decisions without human intervention. The first aspect is generally accepted: the involvement of AI in decision-making processes is viewed positively when AI can make connections faster by analyzing large amounts of data and thereby aid the humans who actually make the decisions. The findings also show that people fear AI will replace much of executive decision-making; most of the comments on the second aspect fell into the disapproval category. Future research may therefore focus more on developing human-machine collaboration frameworks that combine computational information-processing capacity with human expertise to solve moral problems semi-autonomously [2], [3]. Access to large amounts of data and more advanced AI models could shape the future of moral decision-making, prompting further research on the subject.
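
As an illustration of the classification setup described in the summary, the sketch below shows how a pre-trained BERT model might be fine-tuned on the GoEmotions dataset (28 emotion categories) for multiclass text classification with the Hugging Face transformers and datasets libraries. This is a minimal sketch under stated assumptions, not the authors' implementation: the bert-base-uncased checkpoint, the single-label treatment of GoEmotions annotations, and all hyperparameters are illustrative choices.

```python
# Minimal sketch (assumptions): this record does not include the paper's code, so the
# checkpoint, hyperparameters, and single-label framing below are illustrative only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# GoEmotions ("simplified" config): Reddit comments annotated with 28 categories
# (27 emotions + neutral).
dataset = load_dataset("go_emotions", "simplified")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    # Keep only the first annotated label per comment to match a multiclass
    # (single-label) formulation; this simplification is an assumption.
    enc["labels"] = [labels[0] for labels in batch["labels"]]
    return enc

encoded = dataset.map(tokenize, batched=True,
                      remove_columns=dataset["train"].column_names)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=28)

args = TrainingArguments(
    output_dir="bert-goemotions",      # illustrative output path
    per_device_train_batch_size=32,    # illustrative hyperparameters
    num_train_epochs=3,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"])
trainer.train()

# The fine-tuned model can then label new Reddit comments, e.g. the AI-related
# subreddit corpus described in the summary, via trainer.predict(...).
```

Applied to the 20,604 collected comments, a pipeline along these lines would assign one of the 28 emotion labels to each comment, which the study then examines for the 270 comments tied to moral decision-making.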