Macro Ethics Principles for Responsible AI Systems: Taxonomy and Directions


Bibliographic Details
Published in: ACM Computing Surveys, Vol. 56, No. 11, pp. 1-37
Main Authors: Woodgate, Jessica; Ajmeri, Nirav
Format: Journal Article
Language: English
Published: New York, NY: Association for Computing Machinery (ACM), 01.11.2024
ISSN: 0360-0300, 1557-7341
DOI: 10.1145/3672394

More Information
Summary: Responsible AI must be able to make or support decisions that consider human values and can be justified by human morals. Accommodating values and morals in responsible decision making is supported by adopting a perspective of macro ethics, which views ethics through a holistic lens incorporating social context. Normative ethical principles inferred from philosophy can be used to methodically reason about ethics and make ethical judgements in specific contexts. Operationalising normative ethical principles thus promotes responsible reasoning under the perspective of macro ethics. We survey AI and computer science literature and develop a taxonomy of 21 normative ethical principles which can be operationalised in AI. We describe how each principle has previously been operationalised, highlighting key themes that AI practitioners seeking to implement ethical principles should be aware of. We envision that this taxonomy will facilitate the development of methodologies to incorporate normative ethical principles in reasoning capacities of responsible AI systems.