ValueCompass: A Framework of Fundamental Values for Human-AI Alignment

Bibliographic Details
Published in: arXiv.org
Main Authors: Shen, Hua; Knearem, Tiffany; Ghosh, Reshmi; Yang, Yu-Ju; Mitra, Tanushree; Huang, Yun
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 15.09.2024
Summary: As AI systems become more advanced, ensuring their alignment with a diverse range of individual and societal values becomes increasingly critical. But how can we capture fundamental human values and assess the degree to which AI systems align with them? We introduce ValueCompass, a framework of fundamental values, grounded in psychological theory and a systematic review, to identify and evaluate human-AI alignment. We apply ValueCompass to measure the value alignment of humans and language models (LMs) across four real-world vignettes: collaborative writing, education, public sectors, and healthcare. Our findings uncover risky misalignments between humans and LMs, such as LMs endorsing values like "Choose Own Goals" that humans largely reject. We also observe that value preferences vary across vignettes, underscoring the need for context-aware AI alignment strategies. This work provides insights into the design space of human-AI alignment, offering foundations for developing AI that responsibly reflects societal values and ethics.
ISSN:2331-8422